1000+ DevOps Shell Scripts and Advanced Bash environment.
Fast, Advanced Systems Engineering, Automation, APIs, shorter CLIs, etc.
Heavily used in many GitHub repos, dozens of DockerHub builds (Dockerfiles) and 600+ CI builds.
- Scripts for many popular DevOps technologies, see Inventory below for more details
- Advanced configs for common tools like Git, vim, screen, tmux, PostgreSQL psql etc...
- CI configs for most major Continuous Integration products (see CI builds page)
- CI scripts for a drop-in framework of standard checks to run in all CI builds, CI detection, accounting for installation differences across CI environments, root vs user, virtualenvs etc.
- API scripts auto-handling authentication, tokens and other details to quickly query popular APIs with a few keystrokes, just supplying the `/path/endpoint`
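  For example, a minimal sketch using one of the GitHub API scripts (the script location and the `GITHUB_TOKEN` variable are assumptions from memory - check the repo's `github/` directory):

  ```shell
  # assumes a GitHub personal access token is exported for the script to pick up
  export GITHUB_TOKEN="..."

  # query the GitHub API by supplying only the /path/endpoint - auth headers are added for you
  github_api.sh /user/repos | jq -r '.[].full_name'
  ```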
- Advanced Bash environment - `.bashrc` + `.bash.d/*.sh` - aliases, functions, colouring, dynamic Git & shell behaviour enhancements, automatic pathing for installations and major languages like Python, Perl, Ruby, NodeJS, Golang across Linux distributions and Mac. See `.bash.d/README.md`
- Installs the best systems packages - AWS CLI, Azure CLI, GCloud SDK, Digital Ocean CLI, Terraform, Terragrunt, GitHub CLI, Kubernetes `kubectl` & `kustomize`, Helm, eksctl, Docker-Compose, jq and many others... extensive package lists for servers and desktops for most major Linux distributions' package managers and Mac
- `install/` - contains many installation scripts for popular open source software and direct binary downloads from GitHub releases
- `configs/` - contains many dot configs for common technologies like ViM, top, Screen, Tmux, MySQL, PostgreSQL etc.
- `setup/` - contains setup scripts, package lists, extra configs, Mac OS X settings etc.
- Utility Libraries used by many hundreds of scripts and builds across repos:
  - `.bash.d/` - interactive library
  - `lib/` - scripting and CI library
- SQL Scripts - 100+ scripts for PostgreSQL, MySQL, AWS Athena + CloudTrail, Google BigQuery
- Templates - templates for common programming languages and build configs
- Kubernetes Configs - Kubernetes YAML configs for most common scenarios, including Production Best Practices, Tips & Tricks
See Also: similar DevOps repos in other languages
Hari Sekhon
Cloud & Big Data Contractor, United Kingdom
(ex-Cloudera, former Hortonworks Consultant)
(you're welcome to connect with me on LinkedIn)
To bootstrap, install packages and link in to your shell profile to inherit all configs, do:
curl -L https://git.io/bash-bootstrap | sh
- Adds sourcing to `.bashrc` / `.bash_profile` to automatically inherit all `.bash.d/*.sh` environment enhancements for all technologies (see Inventory below)
- Symlinks `.*` config dotfiles to `$HOME` for git, vim, top, htop, screen, tmux, editorconfig, Ansible, PostgreSQL `.psqlrc` etc. (only when they don't already exist so there is no conflict with your own configs)
- Installs OS package dependencies for all scripts (detects the OS and installs the right RPMs, Debs, Apk or Mac HomeBrew packages)
- Installs Python packages
- Installs AWS CLI
To only install package dependencies to run scripts, simply `cd` to the git clone directory and run `make`:

git clone https://github.com/HariSekhon/DevOps-Bash-tools bash-tools

cd bash-tools

make

`make install` sets your shell profile to source this repo. See Individual Setup Parts below for more install/uninstall options.
- Linux & Mac - curl OAuth / JWT, LDAP, find duplicate files, SSL certificate get/validate, URL encoding/decoding, Vagrant, advanced configurations: `.bashrc`, `.bash.d/*.sh`, `.gitconfig`, `.vimrc`, `.screenrc`, `.tmux.conf`, `.toprc`, `.gitignore` ...
- AWS - Amazon Web Services - AWS account summary, lots of IAM reports, CIS Benchmark config hardening, EC2, ECR, EKS, Spot termination, S3 access logging, KMS key rotation info, SSM, CloudTrail, CloudWatch billing alarm with SNS notification topic and subscription for email alerts
- GCP - Google Cloud Platform - massive GCP auto-inventory, scripts for GCE, GKE, GCR, Secret Manager, BigQuery, Cloud SQL, Cloud Scheduler, Terraform service account creation
- Kubernetes - massive Kubernetes auto-inventory, cluster management scripts & tricks
- Docker - Docker API, Dockerhub API, Quay.io API scripts
- Databases - fast CLI wrappers, instant Docker sandboxes (PostgreSQL, MySQL, MariaDB, SQLite), SQL scripts, SQL script testers against all versions of a DB, advanced `.psqlrc`
- Data - data tools, converters and format validators for Avro, Parquet, CSV, JSON, INI / Properties files (Java), LDAP LDIF, XML, YAML
- Big Data & NoSQL - Kafka, Hadoop, HDFS, Hive, Impala, ZooKeeper, Cloudera Manager API & Cloudera Navigator API scripts
- Git - GitHub, GitLab, Bitbucket, Azure DevOps - scripts for Git local & mirror management, GitHub, GitLab & BitBucket APIs
- CI/CD - Continuous Integration / Continuous Delivery - API scripts & build pipeline configs for most major CI systems:
- Jenkins, Concourse, GoCD, TeamCity - one-touch boot & build
- Azure DevOps Pipelines, GitHub Actions Workflows, GitLab CI, BitBucket Pipelines, AppVeyor, BuildKite, Travis CI, Circle CI, Codefresh, CodeShip, Drone.io, Semaphore CI, Shippable ...
- Terraform Cloud, Octopus Deploy
- Checkov / Bridgecrew Cloud
- AI & IPaaS - OpenAI (ChatGPT), Make.com
- Internet Services - Cloudflare, DataDog, Digital Ocean, Kong API Gateway, GitGuardian, Jira, NGrok, Traefik, Pingdom, Wordpress
- Java - Java utilities to debug running Java programs or decompile Java JAR code for deeper debugging
- Python - Python utilities & library management
- Perl - Perl utilities & library management
- Golang - Golang utilities
- Media - MP3 metadata editing, grouping and ordering of albums and audiobooks, mkv/avi to mp4 converters, YouTube channel download
- Spotify - 40+ Spotify API scripts for backups, managing playlists, track deduplication, URI conversion, search, add/delete, liked tracks, followed artists, top artists, top tracks etc.
- More Linux & Mac - more systems administration scripts, package installation automation
- Builds, Languages & Linting - programming language, build system & CI linting
- Templates - Templates for AWS, GCP, Terraform, Docker, Jenkins, Cloud Build, Vagrant, Puppet, Python, Bash, Go, Perl, Java, Scala, Groovy, Maven, SBT, Gradle, Make, GitHub Actions, CircleCI, Jenkinsfile, Makefile, Dockerfile, docker-compose.yml etc.
- Kubernetes Configs - Kubernetes YAML configs for most common scenarios, including Production Best Practices, Tips & Tricks
Top-level `.bashrc`, `bin/`, `.bash.d/` and `configs/` directories:

- `.*` - dot conf files for lots of common software eg. advanced `.vimrc`, `.gitconfig`, massive `.gitignore`, `.editorconfig`, `.screenrc`, `.tmux.conf` etc.
- `.vimrc` - contains many awesome vim tweaks, plus hotkeys for linting lots of different file types in place, including Python, Perl, Bash / Shell, Dockerfiles, JSON, YAML, XML, CSV, INI / Properties files, LDAP LDIF etc. without leaving the editor!
- `.screenrc` - fancy screen configuration including advanced colour bar, large history, hotkey reloading, auto-blanking etc.
- `.tmux.conf` - fancy tmux configuration including advanced colour bar and plugins, settings, hotkey reloading etc.
- Git:
  - `.gitconfig` - advanced Git configuration
  - `.gitignore` - extensive Git ignore of trivial files you shouldn't commit
  - enhanced Git diffs
  - protections against committing AWS secret keys or unresolved merge conflict files
- `.bashrc` - shell tuning and sourcing of `.bash.d/*.sh`
- `.bash.d/*.sh` - thousands of lines of advanced bashrc code, aliases, functions and environment variables for:
- Linux & Mac
- SCM - Git, Mercurial, Svn
- AWS
- GCP
- Docker
- Kubernetes
- Kafka
- Vagrant
- automatic GPG and SSH agent handling for handling encrypted private keys without re-entering passwords, and lazy evaluation to only prompt key load the first time SSH is called
- and lots more - see .bash.d/README for a more detailed list
- run `make bash` to link `.bashrc` / `.bash_profile` and the `.*` dot config files to your `$HOME` directory to auto-inherit everything
- `lib/*.sh` - Bash utility libraries full of functions for Docker, environment, CI detection (Travis CI, Jenkins etc), port and HTTP url availability content checks etc. Sourced from all my other GitHub repos to make setting up Dockerized tests easier
- `install/install_*.sh` - various simple to use installation scripts for common technologies like AWS CLI, Azure CLI, GCloud SDK, Terraform, Ansible, MiniKube, MiniShift (Kubernetes / Redhat OpenShift/OKD dev VMs), Maven, Gradle, SBT, EPEL, RPMforge, Homebrew, Travis CI, Circle CI, AppVeyor, BuildKite, Parquet Tools etc.
- `login.sh` - logs in to major Cloud platforms if their credentials are found in the environment - CLIs such as AWS, GCP, Azure, GitHub... Docker registries: DockerHub, GHCR, ECR, GCR, GAR, ACR, Gitlab, Quay...
- `clean_caches.sh` - cleans out OS package and programming language caches - useful to save space or reduce Docker image size
- `delete_duplicate_files.sh` - deletes duplicate files with (N) suffixes, commonly caused by web browser downloads, in the given or current directory. Checks they're exact duplicates of a matching basename file without the (N) suffix with the exact same checksum for safety. Prompts to delete per file. To auto-accept deletions, do `yes | delete_duplicate_files.sh`. This is a fast way of cleaning up your `~/Downloads` directory and can be put in your user crontab
- `download_url_file.sh` - downloads a file from a URL using wget with no clobber and continue support, or curl with atomic replacement to avoid race conditions. Used by `github/github_download_release_file.sh`, `github_download_release_jar.sh`, and `install/download_*_jar.sh`
- `dump_stats.sh` - dumps common command outputs to text files in a local tarball. Useful to collect support information for vendor support cases
- `curl_auth.sh` - shortens the `curl` command by auto-loading your OAuth2 / JWT API token or username & password from environment variables or an interactive starred password prompt through a ram file descriptor to avoid placing them on the command line (which would expose your credentials in the process list or OS audit log files). Used by many other adjacent API querying scripts
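  For example, a minimal sketch (the exact environment variable the script reads is an assumption here - check the script's `--help`):

  ```shell
  # assumed variable name for illustration - the script feeds it via a ram file descriptor
  export TOKEN="..."

  # query an authenticated endpoint without the token appearing in the process list
  curl_auth.sh https://api.example.com/v1/user | jq .
  ```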
- `find_duplicate_files*.sh` - finds duplicate files by size and/or checksum in given directory trees. Checksums are only done on files that already have matching byte counts for efficiency
- `find_broken_links.sh` - find broken links with delays to avoid tripping defenses
- `find_broken_symlinks.sh` - find broken symlinks pointing to non-existent files/directories
- `http_duplicate_urls.sh` - find duplicate URLs in a given web page
- `image_join_stack.sh` - stack joins two images after matching their widths so they align correctly
- `ldapsearch.sh` - shortens the `ldapsearch` command by inferring switches from environment variables
- `ldap_user_recurse.sh` / `ldap_group_recurse.sh` - recurse Active Directory LDAP users upwards to find all parent groups, or groups downwards to find all nested users (useful for debugging LDAP integration and group-based permissions)
- `log_timestamp_large_intervals.sh` - finds log lines whose timestamp intervals exceed the given number of seconds and outputs those log lines with the difference between the last and current timestamps. Useful to find actions that are taking a long time from log files such as CI/CD logs
- `mac_diff_settings.sh` - takes before and after snapshots of UI setting changes and diffs them to make it easy to find `defaults` keys to add to `setup/mac_settings.sh` to save settings
- `mac_iso_to_usb.sh` - converts a given ISO file to a USB bootable image and burns it onto a given or detected inserted USB drive
- `organize_downloads.sh` - moves files of well-known extensions in the `$HOME/Downloads` directory older than 1 week to capitalized subdirectories of their type to keep the `$HOME/Downloads/` directory tidy
- `copy_to_clipboard.sh` - copies stdin or string arg to the system clipboard on Linux or Mac
- `paste_from_clipboard.sh` - pastes from the system clipboard to stdout on Linux or Mac
- `paste_diff_settings.sh` - takes snapshots of before and after clipboard changes and diffs them to show config changes
- `pldd.sh` - parses `/proc` on Linux to show the runtime `.so` loaded dynamic shared libraries a program pid is using. Runtime equivalent of the classic static `ldd` command, and useful because the system `pldd` command often fails to attach to a process
- `random_select.sh` - selects one of the given args at random. Useful for sampling, running randomized subsets of large test suites etc.
- `shields_embed_logo.sh` - base64 encodes a given icon file or url and prints the `logo=...` url parameter you need to add to the shields.io badge url
- `shred_file.sh` - overwrites a file 7 times to DoD standards before deleting it to prevent recovery of sensitive information
- `shred_free_space.sh` - overwrites free space to prevent recovery of sensitive information for files that have already been deleted
- `split.sh` - splits large files into N parts (defaults to the number of your CPU cores) to parallelize operations on them
- `ssh_dump_stats.sh` - uses SSH and `dump_stats.sh` to dump common command outputs from remote servers to a local tarball. Useful for vendor support cases
- `ssh_dump_logs.sh` - uses SSH to dump logs from servers to local text files for uploading to vendor support cases
- `ssl_get_cert.sh` - gets a remote `host:port` server's SSL cert in a format you can pipe, save and use locally, for example in Java truststores
- `ssl_verify_cert.sh` - verifies a remote SSL certificate (a battle-tested, more feature-rich version `check_ssl_cert.pl` exists in the Advanced Nagios Plugins repo)
- `ssl_verify_cert_by_ip.sh` - verifies SSL certificates on specific IP addresses, useful to test SSL source addresses for CDNs, such as Cloudflare Proxied sources before enabling SSL Full-Strict Mode for end-to-end, or Kubernetes ingresses (see also `curl_k8s_ingress.sh`)
- `urlencode.sh` / `urldecode.sh` - URL encode/decode quickly on the command line, in pipes etc.
- `urlopen.sh` - opens the given URL from the first arg or stdin, or the first URL found in a given file. Uses the system's default browser
- `vagrant_hosts.sh` - generates `/etc/hosts` output from a `Vagrantfile`
- `vagrant_total_mb.sh` - calculates the RAM committed to VMs in a `Vagrantfile`
See also Knowledge Base notes for Linux and Mac.
`mysql/`, `postgres/`, `sql/` and `bin/` directories:

- `sql/` - 100+ SQL scripts for PostgreSQL, MySQL, Google BigQuery and AWS Athena CloudTrail logs integration
- `sqlite.sh` - one-touch SQLite, starts sqlite3 shell with sample 'chinook' database loaded
- `mysql*.sh` - MySQL scripts:
  - `mysql.sh` - shortens the `mysql` command to connect to MySQL by auto-populating switches from both standard environment variables like `$MYSQL_TCP_PORT`, `$DBI_USER`, `$MYSQL_PWD` (see doc) and other common environment variables like `$MYSQL_HOST` / `$HOST`, `$MYSQL_USER` / `$USER`, `$MYSQL_PASSWORD` / `$PASSWORD`, `$MYSQL_DATABASE` / `$DATABASE`
  - `mysql_foreach_table.sh` - executes a SQL query against every table, replacing `{db}` and `{table}` in each iteration eg. `select count(*) from {table}`
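    For example, a minimal sketch (assuming the templated query is passed as an argument - check the script's usage):

    ```shell
    # print a row count for every table in every database, substituting {table} per iteration
    mysql_foreach_table.sh "select count(*) from {table}"
    ```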
  - `mysql_*.sh` - various scripts using `mysql.sh` for row counts, iterating each table, or outputting clean lists of databases and tables for quick scripting
  - `mysqld.sh` - one-touch MySQL, boots docker container + drops in to `mysql` shell, with `/sql` scripts mounted in the container for easy sourcing eg. `source /sql/<name>.sql`. Optionally loads sample 'chinook' database
  - see also the SQL Scripts repo for many more straight MySQL SQL scripts
- `mariadb.sh` - one-touch MariaDB, boots docker container + drops in to `mysql` shell, with `/sql` scripts mounted in the container for easy sourcing eg. `source /sql/<name>.sql`. Optionally loads sample 'chinook' database
- `postgres*.sh` / `psql.sh` - PostgreSQL scripts:
  - `postgres.sh` - one-touch PostgreSQL, boots docker container + drops in to `psql` shell, with `/sql` scripts mounted in the container for easy sourcing eg. `\i /sql/<name>.sql`. Optionally loads sample 'chinook' database
  - `psql.sh` - shortens the `psql` command to connect to PostgreSQL by auto-populating switches from environment variables, using both standard postgres supported environment variables like `$PG*` (see doc) as well as other common environment variables like `$POSTGRESQL_HOST` / `$POSTGRES_HOST` / `$HOST`, `$POSTGRESQL_USER` / `$POSTGRES_USER` / `$USER`, `$POSTGRESQL_PASSWORD` / `$POSTGRES_PASSWORD` / `$PASSWORD`, `$POSTGRESQL_DATABASE` / `$POSTGRES_DATABASE` / `$DATABASE`
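    For example, a minimal sketch using the common environment variable aliases listed above (host, user and database names are hypothetical):

    ```shell
    # psql.sh turns these into the right psql connection switches
    export POSTGRES_HOST=db.example.com
    export POSTGRES_USER=myuser
    export POSTGRES_DATABASE=mydb

    psql.sh   # drops you into a psql shell without retyping connection flags
    ```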
  - `postgres_foreach_table.sh` - executes a SQL query against every table, replacing `{db}`, `{schema}` and `{table}` in each iteration eg. `select count(*) from {table}`
  - `postgres_*.sh` - various scripts using `psql.sh` for row counts, iterating each table, or outputting clean lists of databases, schemas and tables for quick scripting
`aws/` directory:

- AWS scripts - `aws_*.sh`:
  - `aws_cli_create_credential.sh` - creates an AWS service account user for CI/CD or CLI with Admin permissions (or other group or policy), creates an AWS Access Key, saves a credentials CSV and even prints the shell export commands and aws credentials file config to configure your environment to start using it. Useful trick to avoid CLI reauth to `aws sso login` every day.
  - `aws_terraform_create_credential.sh` - creates an AWS Terraform service account with Administrator permissions for Terraform Cloud or other CI/CD systems to run Terraform plan and apply, since no CI/CD systems can work with AWS SSO workflows. Stores the access key as a CSV and prints shell export commands and credentials file config as above.
  - `.envrc-aws` - copy to `.envrc` for direnv to auto-load AWS configuration settings such as AWS Profile, Compute Region, EKS cluster kubectl context etc.
    - calls `.envrc-kubernetes` to set the `kubectl` context isolated to the current shell to prevent race conditions between shells and scripts caused by otherwise naively changing the global `~/.kube/config` context
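    For example, a minimal sketch using direnv (paths are hypothetical):

    ```shell
    # copy the template into a project directory and let direnv load it on cd
    cp .envrc-aws ~/projects/my-app/.envrc
    cd ~/projects/my-app
    direnv allow .   # AWS profile / region / EKS context now auto-load in this shell only
    ```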
aws_terraform_create_s3_bucket.sh
- creates a Terraform S3 bucket for storing the backend state, locks out public access, enables versioning, encryption, and locks out Power Users role and optionally any given user/group/role ARNs via a bucket policy for safetyaws_terraform_create_dynamodb_table.sh
- creates a Terraform locking table in DynamoDB for use with the S3 backend, plus custom IAM policy which can be applied to less privileged accountsaws_terraform_create_all.sh
- runs all of the above, plus also applies the custom DynamoDB IAM policy to the user to ensure if the account is less privileged it can still get the Terraform lock (useful for GitHub Actions environment secret for a read only user to generate Terraform Plans in Pull Request without needing approval)aws_terraform_iam_grant_s3_dynamodb.sh
- creates IAM policies to access any S3 buckets and DynamoDB tables withterraform-state
ortf-state
in their names, and attaches them to the given user. Useful for limited permissions CI/CD accounts that run Terraform Plan eg. in GitHub Actions pull requestsaws_account_summary.sh
- prints AWS account summary inkey = value
pairs for easy viewing / grepping of things likeAccountMFAEnabled
,AccountAccessKeysPresent
, useful for checking whether the root account has MFA enabled and no access keys, comparing number of users vs number of MFA devices etc. (see alsocheck_aws_root_account.py
in Advanced Nagios Plugins)aws_billing_alarm.sh
- creates a CloudWatch billing alarm and SNS topic with subscription to email you when you incur charges above a given threshold. This is often the first thing you want to do on an accountaws_budget_alarm.sh
- creates an AWS Budgets billing alarm and SNS topic with subscription to email you both when you start incurring forecasted charges of over 80% of your budget and when you hit 90% actual usage. This is often the first thing you want to do on an account
`aws_batch_stale_jobs.sh`
- lists AWS Batch jobs that are older than N hours in a given queueaws_batch_kill_stale_jobs.sh
- finds and kills AWS Batch jobs that are older than N hours in a given queueaws_cloudtrails_cloudwatch.sh
- lists Cloud Trails and their last delivery to CloudWatch Logs (should be recent)aws_cloudtrails_event_selectors.sh
- lists Cloud Trails and their event selectors to check each one has at least one event selectoraws_cloudtrails_s3_accesslogging.sh
- lists Cloud Trails buckets and their Access Logging prefix and target bucket. Checks S3 access logging is enabledaws_cloudtrails_s3_kms.sh
- lists Cloud Trails and whether their S3 buckets are KMS securedaws_cloudtrails_status.sh
- lists Cloud Trails status - if logging, multi-region and log file validation enabledaws_config_all_types.sh
- lists AWS Config recorders, checking all resource types are supported (should be true) and includes global resources (should be true)aws_config_recording.sh
- lists AWS Config recorders, their recording status (should be true) and their last status (should be success)
- `aws_csv_creds.sh` - prints AWS credentials from a CSV file as shell export statements. Useful to quickly switch your shell to some exported credentials from a service account for testing permissions or pipe to upload to a CI/CD system via an API (eg. `jenkins_cred_add*.sh`, `github_actions_repo*_set_secret.sh`, `gitlab_*_set_env_vars.sh`, `circleci_*_set_env_vars.sh`, `bitbucket_*_set_env_vars.sh`, `terraform_cloud_*_set_vars.sh`, `kubectl_kv_to_secret.sh`). Supports new user and new access key csv file formats.
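  For example, a minimal sketch (assuming the credentials CSV path is passed as an argument - the filename is hypothetical):

  ```shell
  # load the exported credentials into the current shell for quick permission testing
  eval "$(aws_csv_creds.sh ~/Downloads/new_user_credentials.csv)"
  aws sts get-caller-identity
  ```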
- `aws_codecommit_csv_creds.sh` - prints AWS CodeCommit Git credentials from a CSV file as shell export statements. Similar use case and chaining as above
`aws_ec2_list_instance_states.sh`
- quickly list AWS EC2 instances, their DNS names and States in an easy to read table outputaws_ec2_terminate_instance_by_name.sh
- terminate an AWS EC2 instance by nameaws_ec2_ebs_*.sh
- AWS EC2 EBS scripts:aws_ec2_ebs_volumes.sh
- list EC2 instances and their EBS volumes in the current region
- `aws_ec2_ebs_create_snapshot_and_wait.sh` - creates a snapshot of a given EBS volume ID and waits for it to complete with exponential backoff
- `aws_ec2_ebs_resize_and_wait.sh` - resizes an EBS volume and waits for it to complete modifying and optionally optimizing with exponential backoff
- `aws_ec2_ebs_volumes_unattached.sh` - lists unattached EBS volumes in a table format
aws_ecr_*.sh
- AWS ECR docker image management scripts:aws_ecr_docker_build_push.sh
- builds a docker image and pushes it to ECR with not just thelatest
docker tag but also the current Git hashref and Git tagsaws_ecr_list_repos.sh
- lists ECR repos, and their docker image mutability and whether image scanning is enabledaws_ecr_list_tags.sh
- lists all the tags for a given ECR docker imageaws_ecr_newest_image_tags.sh
- lists the tags for the given ECR docker image with the newest creation date (can use this to determine which image version to tag aslatest
)aws_ecr_alternate_tags.sh
- lists all the tags for a given ECR dockerimage:tag
(use arg<image>:latest
to see what version / build hashref / date tag has been tagged aslatest
)aws_ecr_tag_image.sh
- tags an ECR image with another tag without pulling and pushing itaws_ecr_tag_image_by_digest.sh
- same as above but tags an ECR image found via digest (more accurate as reference by existing tag can be a moving target). Useful to recover images that have become untaggedaws_ecr_tag_latest.sh
- tags a given ECR dockerimage:tag
aslatest
without pulling or pushing the docker imageaws_ecr_tag_branch.sh
- tags a given ECRimage:tag
with the current Git branch without pulling or pushing the docker imageaws_ecr_tag_datetime.sh
- tags a given ECR docker image with its creation date and UTC timestamp (when it was uploaded to ECR) without pulling or pushing the docker imageaws_ecr_tag_newest_image_as_latest.sh
- finds and tags the newest build of a given ECR docker image aslatest
without pulling or pushing the docker imageaws_ecr_tags_timestamps.sh
- lists all the tags and their timestamps for a given ECR docker imageaws_ecr_tags_old.sh
- lists tags older than N days for a given ECR docker imageaws_ecr_delete_old_tags.sh
- deletes tags older than N days for a given ECR docker image. Lists the image:tags to be deleted and prompts for confirmation safety
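  For example, a minimal sketch using the ECR tagging scripts above (image and tag names are hypothetical, and the image:tag argument form is an assumption - check each script's usage):

  ```shell
  # promote an already-pushed build to 'latest' without pulling or pushing image layers
  aws_ecr_tag_latest.sh my-app:2024-01-15-abcdef1

  # check which other tags point at the same image as 'latest'
  aws_ecr_alternate_tags.sh my-app:latest
  ```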
- `aws_foreach_profile.sh` - executes a templated command across all AWS named profiles configured in AWS CLIv2, replacing `{profile}` in each iteration. Combine with other scripts for powerful functionality, auditing, setup etc. eg. `aws_kube_creds.sh` to configure `kubectl` config to all EKS clusters in all environments
- `aws_foreach_region.sh` - executes a templated command against each AWS region enabled for the current account, replacing `{region}` in each iteration. Combine with AWS CLI or scripts to find resources across regions
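  For example, a minimal sketch (assuming the templated command is passed as arguments):

  ```shell
  # show the caller identity for every configured AWS profile
  aws_foreach_profile.sh "aws sts get-caller-identity --profile {profile}"

  # list EC2 instances in every enabled region
  aws_foreach_region.sh "aws ec2 describe-instances --region {region} --output table"
  ```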
- `aws_iam_*.sh` - AWS IAM scripts:
`aws_iam_password_policy.sh`
- prints AWS password policy inkey = value
pairs for easy viewing / grepping (used byaws_harden_password_policy.sh
before and after to show the differences)aws_iam_harden_password_policy.sh
- strengthens AWS password policy according to CIS Foundations Benchmark recommendationsaws_iam_replace_access_key.sh
- replaces the non-current IAM access key (Inactive, Not Used, longer time since used, or an explicitly given key), outputting the new key as shell export statements (useful for piping to the same tools listed foraws_csv_creds.sh
above)aws_iam_policies_attached_to_users.sh
- finds AWS IAM policies directly attached to users (anti-best practice) instead of groupsaws_iam_policies_granting_full_access.sh
- finds AWS IAM policies granting full access (anti-best practice)aws_iam_policies_unattached.sh
- lists unattached AWS IAM policiesaws_iam_policy_attachments.sh
- finds all users, groups and roles where a given IAM policy is attached, so that you can remove all these references in your Terraform code and avoid this errorError: error deleting IAM policy arn:aws:iam::***:policy/mypolicy: DeleteConflict: Cannot delete a policy attached to entities.
aws_iam_policy_delete.sh
- deletes an IAM policy, by first handling all the prerequisite steps of deleting all prior versions and detaching all users, groups and roles
`aws_iam_generate_credentials_report_wait.sh`
- generates an AWS IAM credentials reportaws_iam_users.sh
- list your IAM usersaws_iam_users_access_key_age.sh
- prints AWS users access key status and age (see alsoaws_users_access_key_age.py
in DevOps Python tools which can filter by age and status)aws_iam_users_access_key_age_report.sh
- prints AWS users access key status and age using a bulk credentials report (faster for many users)aws_iam_users_access_key_last_used.sh
- prints AWS users access keys last used dateaws_iam_users_access_key_last_used_report.sh
- same as above using bulk credentials report (faster for many users)aws_iam_users_last_used_report.sh
- lists AWS users password/access keys last used datesaws_iam_users_mfa_active_report.sh
- lists AWS users password enabled and MFA enabled statusaws_iam_users_without_mfa.sh
- lists AWS users with password enabled but no MFAaws_iam_users_mfa_serials.sh
- lists AWS users MFA serial numbers (differentiates Virtual vs Hardware MFAs)aws_iam_users_pw_last_used.sh
- lists AWS users and their password last used date
aws_ip_ranges.sh
- get all AWS IP ranges for a given Region and/or Service using the IP range APIaws_kms_key_rotation_enabled.sh
- lists AWS KMS keys and whether they have key rotation enabledaws_kube_creds.sh
- auto-loads all AWS EKS clusters credentials in the current --profile and --region so your kubectl is ready to rock on AWSaws_kubectl.sh
- runs kubectl commands safely fixed to a given AWS EKS cluster using config isolation to avoid concurrency race conditionsaws_logs_*.sh
- some useful log queries in last N hours (24 hours by default):aws_logs_batch_jobs.sh
- lists AWS Batch job submission requests and their callersaws_logs_ec2_spot.sh
- lists AWS EC2 Spot fleet creation requests, their caller and first tag value for origin hintaws_logs_ecs_tasks.sh
- lists AWS ECS task run requests, their callers and job definitions
aws_meta.sh
- AWS EC2 Metadata API query shortcut. See also the official ec2-metadata shell script with more featuresaws_nat_gateways_public_ips.sh
- lists the public IPs of all NAT gateways. Useful to give to clients to permit through firewalls for webhooks or similar callsaws_rds_open_port_to_my_ip.sh
- adds a security group to an RDS DB instance to open its native database SQL port to your public IP addressaws_rds_get_version.sh
- quickly retrieve the version of an RDS database to know which JDBC jar version to download usinginstall/download_*_jdbc.sh
when setting up connectionsaws_route53_check_ns_records.sh
- checks AWS Route 53 public hosted zones NS servers are delegated in the public DNS hierarchy and that there are no rogue NS servers delegated not matching the Route 53 zone configurationaws_sso_env_creds.sh
- retrieves AWS SSO session credentials in the format of environment export commands for copying to other systems like Terraform Cloudaws_s3_bucket.sh
- creates an S3 bucket, blocks public access, enables versioning, encryption, and optionally locks out any given user/group/role ARNs via a bucket policy for safety (eg. to stop Power Users accessing a sensitive bucket like Terraform state)aws_s3_buckets_block_public_access.sh
- blocks public access to one or more given S3 buckets or files containing bucket names, one per lineaws_s3_account_block_public_access.sh
- blocks S3 public access at the AWS account levelaws_s3_check_buckets_public_blocked.sh
- iterates each S3 bucket and checks it has public access fully blocked via policy. Parallelized for speedupaws_s3_check_account_public_blocked.sh
- checks S3 public access is blocked at the AWS account levelaws_s3_sync.sh
- syncs multiple AWS S3 URLs from file lists. Validates S3 URLs, source and destination list lengths matches, and optionally that path suffixes match, to prevent off-by-one human errors spraying data all over the wrong destination pathsaws_s3_access_logging.sh
- lists AWS S3 buckets and their access logging statusaws_spot_when_terminated.sh
- executes commands when the AWS EC2 instance running this script is notified of Spot Termination, acts as a latch mechanism that can be set any time after bootaws_sqs_check.sh
- sends a test message to an AWS SQS queue, retrieves it to check and then deletes it via the receipt handle idaws_sqs_delete_message.sh
- deletes 1-10 messages from a given AWS SQS queue (to help clear out test messages)aws_ssm_put_param.sh
- reads a value from a command line argument or non-echo prompt and saves it to AWS Systems Manager Parameter Store. Useful for uploading a password without exposing it on your screenaws_secret*.sh
- AWS Secrets Manager scripts:aws_secret_list.sh
- returns the list of secrets, one per lineaws_secret_add.sh
- reads a value from a command line argument or non-echo prompt and saves it to Secrets Manager. Useful for uploading a password without exposing it on your screenaws_secret_add_binary.sh
- base64 encodes a given file's contents and saves it to Secrets Manager as a binary secret. Useful for uploading things like QR code screenshots for sharing MFA to recovery admin accountsaws_secret_update.sh
- reads a value from a command line argument or non-echo prompt and updates a given Secrets Manager secret. Useful for updating a password without exposing it on your screenaws_secret_update_binary.sh
- base64 encodes a given file's contents and updates a given Secrets Manager secret. Useful for updating a QR code screenshot for a root accountaws_secret_get.sh
- gets a secret value for a given secret from Secrets Manager, retrieving either a secure string or secure binary depending on which is available
eksctl_cluster.sh
- downloads eksctl and creates an AWS EKS Kubernetes cluster
See also Knowledge Base notes for AWS.
`gcp/` directory:

- Google Cloud scripts - `gcp_*.sh` / `gce_*.sh` / `gke_*.sh` / `gcr_*.sh` / `bigquery_*.sh`:
  - `.envrc-gcp` - copy to `.envrc` for direnv to auto-load GCP configuration settings such as Project, Region, Zone, GKE cluster kubectl context or any other GCloud SDK settings to shorten `gcloud` commands. Applies to the local shell environment only to avoid race conditions caused by naively changing the global gcloud config at `~/.config/gcloud/active_config`
    - calls `.envrc-kubernetes` to set the `kubectl` context isolated to current shell to prevent race conditions between shells and scripts caused by otherwise naively changing the global `~/.kube/config` context
gcp_terraform_create_credential.sh
- creates a service account for Terraform with full permissions, creates and downloads a credential key json and even prints theexport GOOGLE_CREDENTIALS
command to configure your environment to start using Terraform immediately. Run once for each project and combine with direnv for fast easy management of multiple GCP projectsgcp_ansible_create_credential.sh
- creates an Ansible service account with permissions on the current project, creates and downloads a credential key json and prints the environment variable to immediately use itgcp_cli_create_credential.sh
- creates a GCloud SDK CLI service account with full owner permissions to all projects, creates and downloads a credential key json and even prints theexport GOOGLE_CREDENTIALS
command to configure your environment to start using it. Avoids having to reauth togcloud auth login
every day.gcp_spinnaker_create_credential.sh
- creates a Spinnaker service account with permissions on the current project, creates and downloads a credential key json and even prints the Halyard CLI configuration commands to use itgcp_info.sh
- huge Google Cloud inventory of deployed resources within the current project - Cloud SDK info plus all of the following (detects which services are enabled to query):gcp_info_compute.sh
- GCE Virtual Machine instances, App Engine instances, Cloud Functions, GKE clusters, all Kubernetes objects across all GKE clusters (seekubernetes_info.sh
below for more details)gcp_info_storage.sh
- Cloud SQL info below, plus: Cloud Storage Buckets, Cloud Filestore, Cloud Memorystore Redis, BigTable clusters and instances, Datastore indexesgcp_info_cloud_sql.sh
- Cloud SQL instances, whether their backups are enabled, and all databases on each instancegcp_info_cloud_sql_databases.sh
- lists databases inside each Cloud SQL instance. Included ingcp_info_cloud_sql.sh
gcp_info_cloud_sql_backups.sh
- lists backups for each Cloud SQL instance with their dates and status. Not included ingcp_info_cloud_sql.sh
for brevity. See alsogcp_sql_export.sh
further down for more durable backups to GCSgcp_info_cloud_sql_users.sh
- lists users for each running Cloud SQL instance. Not included ingcp_info_cloud_sql.sh
for brevity but useful to audit users
gcp_info_networking.sh
- VPC Networks, Addresses, Proxies, Subnets, Routers, Routes, VPN Gateways, VPN Tunnels, Reservations, Firewall rules, Forwarding rules, Cloud DNS managed zones and verified domainsgcp_info_bigdata.sh
- Dataproc clusters and jobs in all regions, Dataflow jobs in all regions, PubSub messaging topics, Cloud IOT registries in all regionsgcp_info_tools.sh
- Cloud Source Repositories, Cloud Builds, Container Registry images across all major repos (gcr.io
,us.gcr.io
,eu.gcr.io
,asia.gcr.io
), Deployment Manager deploymentsgcp_info_auth_config.sh
- Auth Configurations, Organizations & Current Configgcp_info_projects.sh
- Projects names and IDsgcp_info_services.sh
- Services & APIs enabledgcp_service_apis.sh
- lists all available GCP Services, APIs and their states (enabled/disabled), and providesis_service_enabled()
function used throughout the adjacent scripts to avoid errors and only show relevant enabled services
gcp_info_accounts_secrets.sh
- IAM Service Accounts, Secret Manager secrets
gcp_info_all_projects.sh
- same as above but for all detected projectsgcp_foreach_project.sh
- executes a templated command across all GCP projects, replacing `{project_id}` and `{project_name}` in each iteration (used by `gcp_info_all_projects.sh` to call `gcp_info.sh`)
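  For example, a minimal sketch (assuming the templated command is passed as arguments):

  ```shell
  # list GCE instances in every GCP project you have access to
  gcp_foreach_project.sh "gcloud compute instances list --project {project_id}"
  ```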
- `gcp_find_orphaned_disks.sh` - lists orphaned disks across one or more GCP projects (not attached to any compute instance)
`gcp_secret*.sh`
- Google Secret Manager scripts:gcp_secret_add.sh
- reads a value from a command line argument or non-echo prompt and saves it to GCP Secrets Manager. Useful for uploading a password without exposing it on your screengcp_secret_add_binary.sh
- uploads a binary file to GCP Secrets Manager by base64 encoding it first. Useful for uploading things like QR code screenshots for sharing MFA to recovery admin accounts
`gcp_secret_update.sh`
- reads a value from a command line argument or non-echo prompt and updates a given GCP Secrets Manager secret. Useful for uploading a password without exposing it on your screengcp_secret_get.sh
- finds the latest version of a given GCP Secret Manager secret and returns its value. Used by adjacent scriptsgcp_secret_label_k8s.sh
- labels a given existing GCP secret with the current kubectl cluster name and namespace for later use bygcp_secrets_to_kubernetes.sh
gcp_secrets_to_kubernetes.sh
- loads GCP secrets to Kubernetes secrets in a 1-to-1 mapping. Can specify a list of secrets or auto-loads all GCP secrets with labelskubernetes-cluster
andkubernetes-namespace
matching the currentkubectl
context (kcd
to the right namespace first, see.bash.d/kubernetes
). See alsokubernetes_get_secret_values.sh
to debug the actual values that got loaded. See also Sealed Secrets / External Secrets in my Kubernetes repogcp_secrets_to_kubernetes_multipart.sh
- creates a Kubernetes secret from multiple GCP secrets (used to putprivate.pem
andpublic.pem
into the same secret to appear as files on volume mounts for apps in pods to use). See also Sealed Secrets / External Secrets in my Kubernetes repogcp_secrets_labels.sh
- lists GCP Secrets and their labels, one per line suitable for quick views or shell pipelinesgcp_secrets_update_lable.sh
- updates all GCP secrets in current project matching label key=value with a new label valuegcp_service_account_credential_to_secret.sh
- creates GCP service account and exports a credential key to GCP Secret Manager (useful to stage or combine withgcp_secrets_to_kubernetes.sh
)
gke_*.sh
- Google Kubernetes Engine scriptsgke_kube_creds.sh
- auto-loads all GKE clusters credentials in the current / given / all projects so your kubectl is ready to rock on GCPgke_kubectl.sh
- runs kubectl commands safely fixed to a given GKE cluster using config isolation to avoid concurrency race conditionsgke_firewall_rule_cert_manager.sh
- creates a GCP firewall rule for a given GKE cluster's masters to access Cert Manager admission webhook (auto-determines the master cidr, network and target tags)gke_firewall_rule_kubeseal.sh
- creates a GCP firewall rule for a given GKE cluster's masters to access Sealed Secrets controller forkubeseal
to work (auto-determines the master cidr, network and target tags)gke_nodepool_nodes.sh
- lists all nodes in a given nodepool on the current GKE cluster via kubectl labels (fast)gke_nodepool_nodes2.sh
- same as above via GCloud SDK (slow, iterates instance groups)gke_nodepool_taint.sh
- taints/untaints all nodes in a given GKE nodepool on the current cluster (seekubectl_node_taints.sh
for a quick way to see taints)gke_nodepool_drain.sh
- drains all nodes in a given nodepool (to decommission or rebuild the node pool, for example with different taints)gke_persistent_volumes_disk_mappings.sh
- lists GKE kubernetes persistent volumes to GCP persistent disk names, along with PVC and namespace, useful when investigating, resizing PVs etc.
gcr_*.sh
- Google Container Registry scripts:gcr_list_tags.sh
- lists all the tags for a given GCR docker imagegcr_newest_image_tags.sh
- lists the tags for the given GCR docker image with the newest creation date (can use this to determine which image version to tag aslatest
)gcr_alternate_tags.sh
- lists all the tags for a given GCR dockerimage:tag
(use arg<image>:latest
to see what version / build hashref / date tag has been tagged aslatest
)gcr_tag_latest.sh
- tags a given GCR dockerimage:tag
aslatest
without pulling or pushing the docker imagegcr_tag_branch.sh
- tags a given GCR dockerimage:tag
with the current Git branch without pulling or pushing the docker imagegcr_tag_datetime.sh
- tags a given GCR docker image with its creation date and UTC timestamp (when it was uploaded or created by Google Cloud Build) without pulling or pushing the docker imagegcr_tag_newest_image_as_latest.sh
- finds and tags the newest build of a given GCR docker image aslatest
without pulling or pushing the docker imagegcr_tags_timestamps.sh
- lists all the tags and their timestamps for a given GCR docker imagegcr_tags_old.sh
- lists tags older than N days for a given GCR docker imagegcr_delete_old_tags.sh
- deletes tags older than N days for a given GCR docker image. Lists the image:tags to be deleted and prompts for confirmation safety- see also cloudbuild.yaml in the Templates repo
- CI/CD on GCP - trigger Google Cloud Build and GKE Kubernetes deployments from orthogonal CI/CD systems like Jenkins / TeamCity:
gcp_ci_build.sh
- script template for CI/CD to trigger Google Cloud Build to build docker container image with extra datetime and latest tagginggcp_ci_deploy_k8s.sh
- script template for CI/CD to deploy GCR docker image to GKE Kubernetes using Kustomize
gce_*.sh
- Google Compute Engine scripts:gce_foreach_vm.sh
- run a command for each GCP VM instance matching the given name/ip regex in the current GCP projectgce_host_ips.sh
- prints the IPs and hostnames of all or a regex match of GCE VMs for use in /etc/hostsgce_ssh.sh
- Runsgcloud compute ssh
to a VM while auto-determining its zone first to override any inherited zone config and make it easier to script iterating through VMsgcs_ssh_keyscan.sh
- SSH keyscans all the GCE VMs returned from the abovegce_host_ips.sh
script and adds them to~/.ssh/known_hosts
gce_meta.sh
- simple script to query the GCE metadata API from within Virtual Machinesgce_when_preempted.sh
- GCE VM preemption latch script - can be executed any time to set one or more commands to execute upon preemptiongce_is_preempted.sh
- GCE VM return true/false if preempted, callable from other scriptsgce_instance_service_accounts.sh
- lists GCE VM instance names and their service accounts
gcp_firewall_disable_default_rules.sh
- disables those lax GCP default network "allow all" firewall rulesgcp_firewall_risky_rules.sh
- lists risky GCP firewall rules that are enabled and allow traffic from 0.0.0.0/0gcp_sql_*.sh
- Cloud SQL scripts:gcp_sql_backup.sh
- creates Cloud SQL backupsgcp_sql_export.sh
- creates Cloud SQL exports to GCSgcp_sql_enable_automated_backups.sh
- enable automated daily Cloud SQL backupsgcp_sql_enable_point_in_time_recovery.sh
- enable point-in-time recovery with write-ahead logsgcp_sql_proxy.sh
- boots a Cloud SQL Proxy to all Cloud SQL instances for fast convenient directpsql
/mysql
access via local sockets. Installs Cloud SQL Proxy if necessarygcp_sql_running_primaries.sh
- lists primary running Cloud SQL instancesgcp_sql_service_accounts.sh
- lists Cloud SQL instance service accounts. Useful for copying to IAM to grant permissions (eg. Storage Object Creator for SQL export backups to GCS)gcp_sql_create_readonly_service_account.sh
- creates a service account with read-only permissions to Cloud SQL eg. to run export backups to GCSgcp_sql_grant_instances_gcs_object_creator.sh
- grants minimal GCS objectCreator permission on a bucket to primary Cloud SQL instances for exports
gcp_cloud_schedule_sql_exports.sh
- creates Google Cloud Scheduler jobs to trigger a Cloud Function via PubSub to run Cloud SQL exports to GCS for all Cloud SQL instances in the current GCP project- the Python GCF function is in the DevOps Python tools repo
bigquery_*.sh
- BigQuery scripts:bigquery_list_datasets.sh
- lists BigQuery datasets in the current GCP projectbigquery_list_tables.sh
- lists BigQuery tables in a given datasetbigquery_list_tables_all_datasets.sh
- lists tables for all datasets in the current GCP projectbigquery_foreach_dataset.sh
- executes a templated command for each datasetbigquery_foreach_table.sh
- executes a templated command for each table in a given datasetbigquery_foreach_table_all_datasets.sh
- executes a templated command for each table in each dataset in the current GCP projectbigquery_table_row_count.sh
- gets the row count for a given tablebigquery_tables_row_counts.sh
- gets the row counts for all tables in a given datasetbigquery_tables_row_counts_all_datasets.sh
- gets the row counts for all tables in all datasets in the current GCP projectbigquery_generate_query_biggest_tables_across_datasets_by_row_count.sh
- generates a BigQuery SQL query to find the top 10 biggest tables by row countbigquery_generate_query_biggest_tables_across_datasets_by_size.sh
- generates a BigQuery SQL query to find the top 10 biggest tables by size- see also the SQL Scripts repo for many more straight BigQuery SQL scripts
- GCP IAM scripts:
gcp_service_account*.sh
:gcp_service_account_credential_to_secret.sh
- creates GCP service account and exports a credential key to GCP Secret Manager (useful to stage or combine withgcp_secrets_to_kubernetes.sh
)gcp_service_accounts_credential_keys.sh
- lists all service account credential keys and expiry dates, cangrep 9999-12-31T23:59:59Z
to find non-expiring keysgcp_service_accounts_credential_keys_age.sh
- lists all service account credential keys age in daysgcp_service_accounts_credential_keys_expired.sh
- lists expired service account credential keys that should be removed and recreated if neededgcp_service_account_members.sh
- lists all members and roles authorized to use any service accounts. Useful for finding GKE Workload Identity mappings
gcp_iam_*.sh
:gcp_iam_roles_in_use.sh
- lists GCP IAM roles in use in the current or all projectsgcp_iam_identities_in_use.sh
- lists GCP IAM identities (users/groups/serviceAccounts) in use in the current or all projectsgcp_iam_roles_granted_to_identity.sh
- lists GCP IAM roles granted to identities matching the regex (users/groups/serviceAccounts) in the current or all projectsgcp_iam_roles_granted_too_widely.sh
- lists GCP IAM roles which have been granted to allAuthenticatedUsers or even worse allUsers (unauthenticated) in one or all projectsgcp_iam_roles_with_direct_user_grants.sh
- lists GCP IAM roles which have been granted directly to users in violation of best-practice group-based managementgcp_iam_serviceaccount_members.sh
- lists members with permissions to use each GCP service accountgcp_iam_serviceaccounts_without_permissions.sh
- finds service accounts without IAM permissions, useful to detect obsolete service accounts after a 90 day unused permissions clean out
`gcp_iam_workload_identities.sh`
- lists GKE Workload Identity integrations, usesgcp_iam_serviceaccount_members.sh
gcp_iam_users_granted_directly.sh
- lists GCP IAM users which have been granted roles directly in violation of best-practice group-based management
gcs_bucket_project.sh
- finds the GCP project that a given bucket belongs to using the GCP Storage APIgcs_curl_file.sh
- retrieves a GCS file's contents from a given bucket and path using the GCP Storage API. Useful for starting shell pipelines or being called from other scripts
See also Knowledge Base notes for GCP.
`kubernetes/` directory:

- `.envrc-kubernetes` - copy to `.envrc` for direnv to auto-load the right Kubernetes `kubectl` context isolated to current shell to prevent race conditions between shells and scripts caused by otherwise naively changing the global `~/.kube/config` context
- `aws/eksctl_cluster.sh` - quickly spins up an AWS EKS cluster using `eksctl` with some sensible defaults
- `kubernetes_info.sh` - huge Kubernetes inventory listing of deployed resources across all namespaces in the current cluster / kube context:
- cluster-info
- master component statuses
- nodes
- namespaces
- deployments, replicasets, replication controllers, statefulsets, daemonsets, horizontal pod autoscalers
- storage classes, persistent volumes, persistent volume claims
- service accounts, resource quotas, network policies, pod security policies
- container images running
- container images running counts descending
- pods (might be too much detail if you have high replica counts, so done last, comment if you're sure nobody has deployed pods outside deployments)
kubectl.sh
- runs kubectl commands safely fixed to a given context using config isolation to avoid concurrency race conditionskubectl_diff_apply.sh
- generates a kubectl diff and prompts to applykustomize_diff_apply.sh
- runs Kustomize build, precreates any namespaces, shows a kubectl diff of the proposed changes, and prompts to applykustomize_diff_branch.sh
- runs Kustomize build against the current and target base branch for current or all given directories, then shows the diff for each directory. Useful to detect differences when refactoring, such as switching to tagged baseskubectl_create_namespaces.sh
- creates any namespaces in yaml files or stdin, a prerequisite for a diff on a blank install, used by adjacent scripts for safetykubernetes_check_objects_namespaced.sh
- checks Kubernetes yaml(s) for objects which aren't explicitly namespaced, which can easily result in deployments to the wrong namespace. Reads the API resources from your current Kubernetes cluster and if successful excludes cluster-wide objectskustomize_check_objects_namespaced.sh
- checks Kustomize build yaml output for objects which aren't explicitly namespaced (uses above script)kubectl_deployment_pods.sh
- gets the pod names with their unpredictable suffixes for a given deployment by querying the deployment's selector labels and then querying pods that match those labelskubectl_get_all.sh
- finds all namespaced Kubernetes objects and requests them for the current or given namespace. Useful becausekubectl get all
misses a lot of object types
`kubectl_get_annotation.sh`
- find a type of object with a given annotationkubectl_restart.sh
- restarts all or filtered deployments/statefulsets in the current or given namespace. Useful when debugging or clearing application problemskubectl_logs.sh
- tails all containers in all pods or filtered pods in the current or given namespace. Useful when debugging a distributed set of pods in live testingkubectl_kv_to_secret.sh
- creates a Kubernetes secret from `key=value`
or shell export format, as args or via stdin (eg. piped fromaws_csv_creds.sh
)kubectl_secret_values.sh
- prints the keys and base64 decoded values within a given Kubernetes secret for quick debugging of Kubernetes secrets. See also:gcp_secrets_to_kubernetes.sh
kubectl_secrets_download.sh
- downloads all secrets in current or given namespace to local files of the same name, useful as a backup before migrating to Sealed Secretskubernetes_secrets_compare_gcp_secret_manager.sh
- compares each Kubernetes secret to the corresponding secret in GCP Secret Manager. Useful to safety check GCP Secret Manager values align before enabling External Secrets to replace themkubernetes_secret_to_external_secret.sh
- generates an External Secret from an existing Kubernetes secretkubernetes_secrets_to_external_secrets.sh
- generates External Secrets from all existing Kubernetes secrets found in the current or given namespacekubernetes_secret_to_sealed_secret.sh
- generates a Bitnami Sealed Secret from an existing Kubernetes secretkubernetes_secrets_to_sealed_secrets.sh
- generates Bitnami Sealed Secrets from all existing Kubernetes secrets found in the current or given namespacekubectl_secrets_annotate_to_be_sealed.sh
- annotates secrets in current or given namespace to allow being overwritten by Sealed Secrets (useful to sync ArgoCD health)kubectl_secrets_not_sealed.sh
- finds secrets with no SealedSecret ownerReferenceskubectl_secrets_to_be_sealed.sh
- finds secrets pending overwrite by Sealed Secrets with the managed annotationkubernetes_foreach_context.sh
- executes a command across all kubectl contexts, replacing `{context}` in each iteration (skips lab contexts `docker` / `minikube` / `minishift` to avoid hangs since they're often offline)
- `kubernetes_foreach_namespace.sh` - executes a command across all kubernetes namespaces in the current cluster context, replacing `{namespace}` in each iteration
  - can be chained with `kubernetes_foreach_context.sh` and useful when combined with `gcp_secrets_to_kubernetes.sh` to load all secrets from GCP to Kubernetes for the current cluster, or combined with `gke_kube_creds.sh` and `kubernetes_foreach_context.sh` for all clusters!
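  For example, a minimal sketch (the commands are illustrative, assuming the templated command is passed as arguments):

  ```shell
  # list pods in every namespace of the current cluster
  kubernetes_foreach_namespace.sh "kubectl get pods -n {namespace} --no-headers"

  # or run a command across every (non-lab) kubectl context
  kubernetes_foreach_context.sh "kubectl get nodes --context {context}"
  ```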
kubernetes_api.sh
- finds Kubernetes API and runs your curl arguments against it, auto-getting authorization token and auto-populating OAuth authentication headerkubernetes_autoscaler_release.sh
- finds the latest Kubernetes Autoscaler release that matches your local Kubernetes cluster version using kubectl and the GitHub API. Useful for quickly finding the image override version foreks-cluster-autoscaler-kustomization.yaml
in the Kubernetes configs repokubernetes_etcd_backup.sh
- creates a timestamped backup of the Kubernetes Etcd database for a kubeadm clusterkubernetes_delete_stuck_namespace.sh
- to forcibly delete those pesky kubernetes namespaces of 3rd party apps like Knative that get stuck and hang indefinitely on the finalizers during deletionkubeadm_join_cmd.sh
- outputskubeadm join
command (generates new token) to join an existing Kubernetes cluster (used in vagrant kubernetes provisioning scripts)kubeadm_join_cmd2.sh
- outputskubeadm join
command manually (calculates cert hash + generates new token) to join an existing Kubernetes clusterkubectl_exec.sh
- finds and execs to the first Kubernetes pod matching the given name regex, optionally specifying the container name regex to exec to, and shows the full generatedkubectl exec
command line for claritykubectl_exec2.sh
- finds and execs to the first Kubernetes pod matching given pod filters, optionally specifying the container to exec to, and shows the full generatedkubectl exec
command line for claritykubectl_pods_per_node.sh
- lists number of pods per node sorted descendingkubectl_pods_important.sh
- lists important pods and their nodes to check on schedulingkubectl_pods_colocated.sh
- lists pods from deployments/statefulsets that are colocated on the same nodekubectl_node_labels.sh
- lists nodes and their labels, one per line, easier to read visually or pipe in scriptingkubectl_pods_running_with_labels.sh
- lists running pods with labels matching key=value pair argumentskubectl_node_taints.sh
- lists nodes and their taintskubectl_jobs_stuck.sh
- finds Kubernetes jobs stuck for hours or days with no completionskubectl_jobs_delete_stuck.sh
- prompts for confirmation to delete stuck Kubernetes jobs found by script abovekubectl_images.sh
- lists Kubernetes container images running on the current clusterkubectl_image_counts.sh
- lists Kubernetes container images running counts sorted descendingkubectl_image_deployments.sh
- lists which deployments, statefulsets or daemonsets container images belong to. Useful to find which deployment, statefulset or daemonset to upgrade to replace a container image eg. when replacing the deprecated k8s.gcr.io registry with registry.k8s.io
`kubectl_pod_count.sh`
- lists Kubernetes pods total running countkubectl_pod_labels.sh
- lists Kubernetes pods and their labels, one label per line for easier shell script piping for further actionskubectl_pod_ips.sh
- lists Kubernetes pods and their pod IP addresseskubectl_container_count.sh
- lists Kubernetes containers total running countkubectl_container_counts.sh
- lists Kubernetes containers running counts by name sorted descendingkubectl_pods_dump_*.sh
- dump stats / logs / jstacks from all pods matching a given regex and namespace to txt files for support debuggingkubectl_pods_dump_stats.sh
- dump statskubectl_pods_dump_logs.sh
- dump logskubectl_pods_dump_jstacks.sh
- dump Java jstackskubectl_pods_dump_all.sh
- calls the abovekubectl_pods_dump_*.sh
scripts for N iterations with a given interval
kubectl_empty_namespaces.sh
- finds namespaces without any of the usual objects usingkubectl get all
kubectl_delete_empty_namespaces.sh
- removes empty namespaces, useskubectl_empty_namespaces.sh
- `kubectl_<image>.sh` - quick launch one-off pods for interactive debugging in Kubernetes:
  - `kubectl_alpine.sh`
  - `kubectl_busybox.sh`
  - `kubectl_curl.sh`
  - `kubectl_dnsutils.sh`
  - `kubectl_gcloud_sdk.sh`
  - `kubectl_run_sa.sh` - launch a quick pod with the given service account to test private repo pull & other permissions
kubectl_port_forward.sh
- launches `kubectl port-forward` to a given pod's port with an optional label or name filter. If more than one pod is found, prompts with an interactive dialogue to choose one. Optionally automatically opens the forwarded localhost URL in the default browser
- `kubectl_port_forward_spark.sh` - does the above for Spark UI
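  For example, a hypothetical invocation of `kubectl_port_forward.sh` (the argument order is an assumption - check the script's usage):

  ```shell
  # port-forward to the first pod whose name matches 'grafana' and open it locally
  kubectl_port_forward.sh grafana 3000
  ```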
helm_template.sh
- templates a Helm chart for Kustomize deploymentskustomize_parse_helm_charts.sh
- parses the Helm charts from one or morekustomization.yaml
files into TSV format for further shell pipe processingkustomize_install_helm_charts.sh
- installs the Helm charts from one or morekustomization.yaml
files the old-fashioned Helm CLI way so that tools like Nova can be used to detect outdated charts (used in the Kubernetes-configs repo's CI)
kustomize_update_helm_chart_versions.sh
- updates one or morekustomization.yaml
files to the latest versions of any charts they containkustomize_materialize.sh
- recursively materializes allkustomization.yaml
tokustomization.materialized.yaml
in the same directories for scanning with tools like Pluto to detect deprecated API objects inherited from embedded Helm charts. Parallelized for performance- ArgoCD:
argocd_auto_sync.sh
- toggles Auto-sync on/off for a given app to allow repairs and maintenance operations, and also disables / re-enables the App-of-Apps base app to stop it re-enabling the app
argocd_apps_sync.sh
- syncs all ArgoCD apps matching an optional ERE regex filter on their names using the ArgoCD CLI
argocd_apps_wait_sync.sh
- syncs all ArgoCD apps matching an optional ERE regex filter on their names using the ArgoCD CLI while also checking their health and operation
argocd_generate_resource_whitelist.sh
- generates a yaml cluster and namespace resource whitelist for ArgoCD project config. If given an existing yaml, will merge in its original whitelists, dedupe, and write them back into the file using an in-place edit. Useful because ArgoCD 2.2+ doesn't show resources that aren't explicitly allowed, such as ReplicaSets and Pods
- Pluto:
pluto_detect_helm_materialize.sh
- recursively materializes all helmChart.yaml
and runs Pluto on each directory to work around this issuepluto_detect_kustomize_materialize.sh
- recursively materializes allkustomization.yaml
and runs Pluto on each directory to work around this issuepluto_detect_kubectl_dump_objects.sh
- dumps all live Kubernetes objects to /tmp so that Pluto can be run on them to detect deprecated API objects on the cluster from any source
- Rancher:
rancher_api.sh
- queries the Rancher API with authenticationrancher_kube_creds.sh
- downloads all Rancher clusters credentials into subdirectories matching cluster names, with.envrc
in each, so a quickcd
into one and your kubectl is ready to rock
- see also Google Kubernetes Engine scripts in the GCP - Google Cloud Platform section above
- see also the Kubernetes configs repo
See also Knowledge Base notes for Kubernetes.
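For a sense of the boilerplate these wrappers save, kubectl_pods_per_node.sh is described above as listing the number of pods per node sorted descending - a roughly equivalent raw one-liner (illustrative only, not the script's actual implementation) would be:

```shell
# count pods per node across all namespaces, busiest nodes first
# (column 8 of 'kubectl get pods -o wide' output is the NODE column)
kubectl get pods --all-namespaces -o wide --no-headers |
  awk '{print $8}' | sort | uniq -c | sort -rn
```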
docker/
directory:
docker_*.sh
/dockerhub_*.sh
- Docker / DockerHub API scripts:dockerhub_api.sh
- queries DockerHub API v2 with or without authentication ($DOCKERHUB_USER
&$DOCKERHUB_PASSWORD
/$DOCKERHUB_TOKEN
)docker_api.sh
- queries a Docker Registry with optional basic authentication if$DOCKER_USER
&$DOCKER_PASSWORD
are setdocker_build_hashref.sh
- runsdocker build
and auto-generates docker image name and tag from relative Git path and commit short SHA hashref and a dirty sha suffix if git contents are modified. Useful to compare docker image sizes between your clean and modified versions ofDockerfile
or contentsdocker_registry_list_images.sh
- lists images in a given private Docker Registrydocker_registry_list_tags.sh
- lists tags for a given image in a private Docker Registrydocker_registry_get_image_manifest.sh
- gets a given image:tag manifest from a private Docker Registrydocker_registry_tag_image.sh
- tags a given image with a new tag in a private Docker Registry via the API without pulling and pushing the image data (much faster and more efficient)
dockerhub_list_tags.sh
- lists tags for a given DockerHub repo. See also dockerhub_show_tags.py in the DevOps Python tools repo.dockerhub_list_tags_by_last_updated.sh
- lists tags for a given DockerHub repo sorted by last updated timestamp descendingdockerhub_search.sh
- searches with a configurable number of returned items (older docker cli was limited to 25 results)clean_caches.sh
- cleans out OS package and programming language caches, call near end ofDockerfile
to reduce Docker image size- see also the Dockerfiles repo
quay_api.sh
- queries the Quay.io API with OAuth2 authentication token$QUAY_TOKEN
See also Knowledge Base notes for Docker.
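To illustrate the API calls that docker_registry_list_tags.sh and friends wrap, this is the underlying Docker Registry v2 endpoint queried directly with curl (a minimal sketch - registry.example.com and myorg/myapp are placeholder values, and the real scripts add authentication handling, error checking and sensible defaults):

```shell
# list tags for an image via the Docker Registry v2 API
curl -sS -u "$DOCKER_USER:$DOCKER_PASSWORD" \
  "https://registry.example.com/v2/myorg/myapp/tags/list" |
  jq -r '.tags[]'
```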
data/
directory:
-
avro_tools.sh
- runs Avro Tools jar, downloading it if not already present (determines latest version when downloading) -
parquet_tools.sh
- runs Parquet Tools jar, downloading it if not already present (determines latest version when downloading) -
csv_header_indices.sh
- list CSV headers with their zero indexed numbers, useful reference when coding against column positions -
Data format validation
validate_*.py
from DevOps Python Tools repo: -
json2yaml.sh
- converts JSON to YAML -
yaml2json.sh
- converts YAML to JSON - needed for some APIs like GitLab CI linting (see Gitlab section above)
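The sort of conversion json2yaml.sh and yaml2json.sh wrap can be approximated with Python one-liners (illustrative equivalents only, not the scripts' implementations - requires the PyYAML module, and the input file names are placeholders):

```shell
# JSON -> YAML
python3 -c 'import sys, json, yaml; print(yaml.safe_dump(json.load(sys.stdin)))' < deployment.json

# YAML -> JSON, eg. for APIs that only accept JSON such as GitLab CI linting
python3 -c 'import sys, json, yaml; print(json.dumps(yaml.safe_load(sys.stdin), indent=2))' < .gitlab-ci.yml
```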
bigdata/
and kafka/
directories:
kafka_*.sh
- scripts to make Kafka CLI usage easier including auto-setting Kerberos to source TGT from environment and auto-populating broker and zookeeper addresses. These are auto-added to the$PATH
when.bashrc
is sourced. For something similar for Solr, seesolr_cli.pl
in the DevOps Perl Tools repo.zookeeper*.sh
- Apache ZooKeeper scripts:zookeeper_client.sh
- shortenszookeeper-client
command by auto-populating the zookeeper quorum from the environment variable$ZOOKEEPERS
or else parsing the zookeeper quorum from/etc/**/*-site.xml
to make it faster and easier to connectzookeeper_shell.sh
- shortens Kafka'szookeeper-shell
command by auto-populating the zookeeper quorum from the environment variable$KAFKA_ZOOKEEPERS
and optionally$KAFKA_ZOOKEEPER_ROOT
to make it faster and easier to connect
hive_*.sh
/beeline*.sh
- Apache Hive scripts:beeline.sh
- shortensbeeline
command to connect to HiveServer2 by auto-populating Kerberos and SSL settings, zookeepers for HiveServer2 HA discovery if the environment variable$HIVE_HA
is set or using the$HIVESERVER_HOST
environment variable so you can connect with no arguments (prompts for HiveServer2 address if you haven't set$HIVESERVER_HOST
or$HIVE_HA
)beeline_zk.sh
- same as above for HiveServer2 HA by auto-populating SSL and ZooKeeper service discovery settings (specify$HIVE_ZOOKEEPERS
environment variable to override). Automatically called bybeeline.sh
if either$HIVE_ZOOKEEPERS
or$HIVE_HA
is set (the latter parseshive-site.xml
for the ZooKeeper addresses)
hive_foreach_table.sh
- executes a SQL query against every table, replacing{db}
and{table}
in each iteration eg.select count(*) from {table}
hive_list_databases.sh
- list Hive databases, one per line, suitable for scripting pipelineshive_list_tables.sh
- list Hive tables, one per line, suitable for scripting pipelineshive_tables_metadata.sh
- lists a given DDL metadata field for each Hive table (to compare tables)hive_tables_location.sh
- lists the data location per Hive table (eg. compare external table locations)hive_tables_row_counts.sh
- lists the row count per Hive tablehive_tables_column_counts.sh
- lists the column count per Hive table
impala*.sh
- Apache Impala scripts:impala_shell.sh
- shortensimpala-shell
command to connect to Impala by parsing the Hadoop topology map and selecting a random datanode to connect to its Impalad, acting as a cheap CLI load balancer. For a real load balancer see HAProxy config for Impala (and many other Big Data & NoSQL technologies). Optional environment variables$IMPALA_HOST
(eg. point to an explicit node or an HAProxy load balancer) andIMPALA_SSL=1
(or use regular impala-shell--ssl
argument pass through)impala_foreach_table.sh
- executes a SQL query against every table, replacing{db}
and{table}
in each iteration eg.select count(*) from {table}
impala_list_databases.sh
- list Impala databases, one per line, suitable for scripting pipelinesimpala_list_tables.sh
- list Impala tables, one per line, suitable for scripting pipelinesimpala_tables_metadata.sh
- lists a given DDL metadata field for each Impala table (to compare tables)impala_tables_location.sh
- lists the data location per Impala table (eg. compare external table locations)impala_tables_row_counts.sh
- lists the row count per Impala tableimpala_tables_column_counts.sh
- lists the column count per Impala table
hdfs_*.sh
- Hadoop HDFS scripts:hdfs_checksum*.sh
- walks an HDFS directory tree and outputs HDFS native checksums (faster) or portable externally comparable CRC32, in serial or in parallel to save timehdfs_find_replication_factor_1.sh
/hdfs_set_replication_factor_3.sh
- finds HDFS files with replication factor 1 / sets HDFS files with replication factor <=2 to replication factor 3 to repair replication safety and avoid no replica alarms during maintenance operations (see also Python API version in the DevOps Python Tools repo)hdfs_file_size.sh
/hdfs_file_size_including_replicas.sh
- quickly differentiate HDFS files raw size vs total replicated sizehadoop_random_node.sh
- picks a random Hadoop cluster worker node, like a cheap CLI load balancer, useful in scripts when you want to connect to any worker etc. See also the HAProxy Load Balancer configurations, which focus on master nodes
cloudera_*.sh
- Cloudera scripts:cloudera_manager_api.sh
- script to simplify querying Cloudera Manager API using environment variables, prompts, authentication and sensible defaults. Built on top ofcurl_auth.sh
cloudera_manager_impala_queries*.sh
- queries Cloudera Manager for recent Impala queries, failed queries, exceptions, DDL statements, metadata stale errors, metadata refresh calls etc. Built on top ofcloudera_manager_api.sh
cloudera_manager_yarn_apps.sh
- queries Cloudera Manager for recent Yarn apps. Built on top ofcloudera_manager_api.sh
cloudera_navigator_api.sh
- script to simplify querying Cloudera Navigator API using environment variables, prompts, authentication and sensible defaults. Built on top ofcurl_auth.sh
cloudera_navigator_audit_logs.sh
- fetches Cloudera Navigator audit logs for given service eg. hive/impala/hdfs via the API, simplifying date handling, authentication and common settings. Built on top ofcloudera_navigator_api.sh
cloudera_navigator_audit_logs_download.sh
- downloads Cloudera Navigator audit logs for each service by year. Skips existing logs, deletes partially downloaded logs on failure, generally retry safe (while true, Control-C, notkill -9
obviously). Built on top ofcloudera_navigator_audit_logs.sh
See also Knowledge Base notes for Hadoop.
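As an example of the {db} / {table} templating described above for hive_foreach_table.sh and impala_foreach_table.sh, a quick row-count audit across every table might look like this (a sketch assuming the query template is passed as a single quoted argument - check each script's --help for exact usage):

```shell
# count rows in every Hive table, substituting {db} and {table} on each iteration
hive_foreach_table.sh "SELECT COUNT(*) FROM {db}.{table}"

# the same audit via Impala
impala_foreach_table.sh "SELECT COUNT(*) FROM {db}.{table}"
```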
git/
, github/
, gitlab/
, bitbucket/
and azure_devops/
directories:
git/*.sh
- Git scripts:precommit_run_changed_files.sh
- runs pre-commit on all files changed on the current branch vs the default branch. Useful to reproducepre-commit
checks that are failing in pull requests to get your PRs to passgit_diff_commit.sh
- runs git diff and commit with a generic"updated $filename"
commit message for each file in the files or directories given if they have changed, or all committed files under$PWD
if no args are given. Super convenient for fast commits on the command line, and in vim and IDEs via hotkeysgit_review_push.sh
- shows diff of what would be pushed upstream and prompts to push. Convenient for fast reviewed pushes via vim or IDEs hotkeysgit_branch_delete_squash_merged.sh
- carefully detects if a squash merged branch you want to delete has no changes with the default trunk branch before deleting it. See Squash Merges in knowledge-base about why this is necessary.git_foreach_branch.sh
- executes a command on all branches (useful in heavily version branched repos like in my Dockerfiles repo)git_foreach_repo.sh
- executes a command against all adjacent repos from a given repolist (used heavily by many adjacent scripts)git_foreach_modified.sh
- executes a command against each file with git modified statusgit_merge_all.sh
/git_merge_master.sh
/git_merge_master_pull.sh
- merges updates from master branch to all other branches to avoid drift on longer lived feature branches / version branches (eg. Dockerfiles repo)git_remotes_add_origin_providers.sh
- auto-creates remotes for the 4 major public repositories (GitHub/GitLab/Bitbucket/Azure DevOps), useful for git pull --all
to fetch and merge updates from all providers in one commandgit_remotes_set_multi_origin.sh
- sets up multi-remote origin for unified push to automatically keep the 4 major public repositories in sync (especially useful for Bitbucket and Azure DevOps which don't have GitLab's auto-mirroring from GitHub feature)git_remotes_set_https_to_ssh.sh
- converts local repo's remote URLs from https to ssh (more convenient with SSH keys instead of https auth tokens, especially since Azure DevOps expires personal access tokens every year)git_remotes_set_ssh_to_https.sh
- converts local repo's remote URLs from ssh to https (to get through corporate firewalls or hotels if you travel a lot)git_remotes_set_https_creds_helpers.sh
- adds Git credential helpers configuration to the local git repo to use http API tokens dynamically from environment variables if they're setgit_repos_pull.sh
- pull multiple repos based on a source file mapping list - useful for easily sync'ing lots of Git repos among computersgit_repos_update.sh
- same as above but also runs themake update
build to install the latest dependencies, leverages the above scriptgit_grep_env_vars.sh
- find environment variables in the current git repo's code base in the formatSOME_VAR
(useful to find undocumented environment variables in internal or open source projects such as ArgoCD eg. argoproj/argo-cd #8680)
git_log_empty_commits.sh
- find empty commits in git history (eg. if agit filter-branch
was run but--prune-empty
was forgotten, leaking metadata like subjects containing file names or other sensitive info)git_files_in_history.sh
- finds all filename / file paths in the git log history, useful for prepping forgit filter-branch
git_filter_branch_fix_author.sh
- rewrites Git history to replace author/committer name & email references (useful to replace default account commits). Powerful, read--help
andman git-filter-branch
carefully. Should only be used by Git Expertsgit_filter_repo_replace_text.sh
- rewrites Git history to replace a given text to scrub a credential or other sensitive token from history. Refuses to operate on tokens less than 8 chars for safetygit_tag_release.sh
- creates a Git tag, auto-incrementing a.N
suffix on the year/month/day date format if no exact version givengit_submodules_update_repos.sh
- updates submodules (pulls and commits latest upstream github repo submodules) - used to cascade submodule updates throughout all my reposgit_askpass.sh
- credential helper script to use environment variables for git authenticationmarkdown_generate_index.sh
- generates a markdown index list from the headings in a given markdown file such as README.mdmarkdown_replace_index.sh
- replaces a markdown index section in a given markdown file usingmarkdown_generate_index.sh
github/*.sh
- GitHub API / CLI scripts:github_api.sh
- queries the GitHub API. Can infer GitHub user, repo and authentication token from local checkout or environment ($GITHUB_USER
,$GITHUB_TOKEN
)github_install_binary.sh
- installs a binary from GitHub releases into $HOME/bin or /usr/local/bin. Auto-determines the latest release if no version specified, detects and unpacks any tarball or zip filesgithub_foreach_repo.sh
- executes a templated command for each non-fork GitHub repo, replacing the{owner}
/{name}
or{repo}
placeholders in each iterationgithub_clone_or_pull_all_repos.sh
- git clones or pulls all repos for a user or organization into directories of the same name under the current directorygithub_download_release_file.sh
- downloads a file from GitHub Releases, optionally determining the latest version, usesbin/download_url_file.sh
github_download_release_jar.sh
- downloads a JAR file from GitHub Releases (used byinstall/download_*_jar.sh
for things like JDBC drivers or Java decompilers), optionally determines latest version to download, and finally validates the downloaded file's formatgithub_invitations.sh
- lists / accepts repo invitations. Useful to accept a large number of invites to repos generated by automationgithub_mirror_repos_to_gitlab.sh
- creates/syncs GitHub repos to GitLab for migrations or to cron fast free Disaster Recovery, including all branches and tags, plus the repo descriptions. Note this doesn't include PRs/wikis/releasesgithub_mirror_repos_to_bitbucket.sh
- creates/syncs GitHub repos to BitBucket for migrations or to cron fast free Disaster Recovery, including all branches and tags, plus the repo descriptions. Note this doesn't include PRs/wikis/releasesgithub_mirror_repos_to_aws_codecommit.sh
- creates/syncs GitHub repos to AWS CodeCommit for migrations or to cron fast almost free Disaster Recovery (close to $0 compared to $100-400+ per month for Rewind BackHub), including all branches and tags, plus the repo descriptions. Note this doesn't include PRs/wikis/releasesgithub_mirror_repos_to_gcp_source_repos.sh
- creates/syncs GitHub repos to GCP Source Repos for migrations or to cron fast almost free Disaster Recovery (close to $0 compared to $100-400+ per month for Rewind BackHub), including all branches and tags. Note this doesn't include repo description/PRs/wikis/releasesgithub_pull_request_create.sh
- creates a Pull Request idempotently by first checking for an existing PR between the branches, and also checking if there are the necessary commits between the branches, to avoid common errors from blindly raising PRs. Useful to automate code promotion across environment branches. Also works across repo forks and is used bygithub_repo_fork_update.sh
. Even populates the GitHub pull request template and does Jira ticket number replacement from the branch prefix
github_pull_request_preview.sh
- opens a GitHub Pull Request preview page from the current local branch to the given or default branchgithub_push_pr_preview.sh
- pushes to GitHub origin, sets upstream branch, then open a Pull Request preview from current branch to the given or default trunk branch in your browsergithub_push_pr.sh
- pushes to GitHub origin, sets upstream branch, then idempotently creates a Pull Request from current branch to the given or default trunk branch and opens the generated PR in your browser for review
github_merge_branch.sh
- merges one branch into another branch via a Pull Request for full audit tracking all changes. Useful to automate feature PRs, code promotion across environment branches, or backport hotfixes from Production or Staging to trunk branches such as master, main, dev or developgithub_remote_set_upstream.sh
- in a forked GitHub repo's checkout, determine the origin of the fork using GitHub CLI and configure a git remote to the upstream. Useful to be able to easily pull updates from the original source repogithub_pull_merge_trunk.sh
- pulls the origin or fork upstream repo's trunk branch and merges it into the local branch. In a forked GitHub repo's checkout, determines the origin of the fork using the GitHub CLI, configures a git remote to the upstream, pulls the default branch and, if on a branch other than the default, merges the default branch into the current local branch. Simplifies and automates keeping your checkout or forked repo up to date with the original source repo to quickly resolve merge conflicts locally and submit updated Pull Requests
github_forked_add_remote.sh
- quickly adds a forked repo as a remote from an interactive menu list of forked repos
github_forked_checkout_branch.sh
- quickly checks out a forked repo's branch from an interactive menu list of forked repos and their branches
github_actions_foreach_workflow.sh
- executes a templated command for each workflow in a given GitHub repo, replacing{name}
,{id}
and{state}
in each iterationgithub_actions_aws_create_load_credential.sh
- creates an AWS user with group/policy, generates and downloads access keys, and uploads them to the given repogithub_actions_in_use.sh
- lists GitHub Actions directly referenced in the .github/workflows in the current local repo checkoutgithub_actions_in_use_repo.sh
- lists GitHub Actions for a given repo via the API, including following imported reusable workflowsgithub_actions_in_use_across_repos.sh
- lists GitHub Actions in use across all your reposgithub_actions_repos_lockdown.sh
- secures GitHub Actions settings across all user repos to only GitHub, verified partners and selected 3rd party actionsgithub_actions_repo_set_secret.sh
- sets a secret in the given repo fromkey=value
or shell export format, as args or via stdin (eg. piped fromaws_csv_creds.sh
)github_actions_repo_env_set_secret.sh
- sets a secret in the given repo and environment fromkey=value
or shell export format, as args or via stdin (eg. piped fromaws_csv_creds.sh
)github_actions_repo_secrets_overriding_org.sh
- finds any secrets for a repo that are overriding organization level secrets. Useful to combine withgithub_foreach_repo.sh
for auditinggithub_actions_repo_restrict_actions.sh
- restricts GitHub Actions in the given repo to only running actions from GitHub and verified partner companies (eg. AWS, Docker)
github_actions_repo_actions_allow.sh
- allows select 3rd party GitHub Actions in the given repogithub_actions_runner.sh
- generates a GitHub Actions self-hosted runner token for a given Repo or Organization via the GitHub API and then runs a dockerized GitHub Actions runner with the appropriate configurationgithub_actions_runner_local.sh
- downloads, configures and runs a local GitHub Actions Runner for Linux or Macgithub_actions_runner_token.sh
- generates a GitHub Actions runner token to register a new self-hosted runnergithub_actions_runners.sh
- lists GitHub Actions self-hosted runners for a given Repo or Organizationgithub_actions_delete_offline_runners.sh
- deletes offline GitHub Actions self-hosted runners. Useful to clean up short-lived runners eg. Docker, Kubernetesgithub_actions_workflows.sh
- lists GitHub Actions workflows for a given repo (or auto-infers local repository)github_actions_workflow_runs.sh
- lists GitHub Actions workflow runs for a given workflow id or namegithub_actions_workflows_status.sh
- lists all GitHub Actions workflows and their statuses for a given repogithub_actions_workflows_state.sh
- lists GitHub Actions workflows enabled/disabled states (GitHub now disables workflows after 6 months without a commit)github_actions_workflows_disabled.sh
- lists GitHub Actions workflows that are disabled. Combine withgithub_foreach_repo.sh
to scan all repos to find disabled workflowsgithub_actions_workflow_enable.sh
- enables a given GitHub Actions workflowgithub_actions_workflows_enable_all.sh
- enables all GitHub Actions workflows in a given repo. Useful to undo GitHub disabling all workflows in a repo after 6 months without a commitgithub_actions_workflows_trigger_all.sh
- triggers all workflows for the given repogithub_actions_workflows_cancel_all_runs.sh
- cancels all workflow runs for the given repogithub_actions_workflows_cancel_waiting_runs.sh
- cancels workflow runs that are in waiting state, eg. waiting for old deployment approvalsgithub_ssh_get_user_public_keys.sh
- fetches a given GitHub user's public SSH keys via the API for piping to~/.ssh/authorized_keys
or adjacent toolsgithub_ssh_get_public_keys.sh
- fetches the currently authenticated GitHub user's public SSH keys via the API, similar to above but authenticated to get identifying key commentsgithub_ssh_add_public_keys.sh
- uploads SSH keys from local files or standard input to the currently authenticated GitHub account. Specify pubkey files (default:~/.ssh/id_rsa.pub
) or read from standard input for piping from adjacent toolsgithub_ssh_delete_public_keys.sh
- deletes given SSH keys from the currently authenticated GitHub account by key id or title regex matchgithub_gpg_get_user_public_keys.sh
- fetches a given GitHub user's public GPG keys via the APIgithub_generate_status_page.sh
- generates a STATUS.md page by merging all the README.md headers for all of a user's non-forked GitHub repos or a given list of any repos etc.github_purge_camo_cache.sh
- send HTTP Purge requests to all camo urls (badge caches) for the current or given GitHub repo's landing/README.md pagegithub_ip_ranges.sh
- returns GitHub's IP ranges, either all by default or for a select given service such as hooks or actionsgithub_sync_repo_descriptions.sh
- syncs GitHub repo descriptions to GitLab & BitBucket reposgithub_release.sh
- creates a GitHub Release, auto-incrementing a.N
suffix on the year/month/day date format if no exact version givengithub_repo_description.sh
- fetches the given repo's description (used bygithub_sync_repo_descriptions.sh
)github_repo_find_files.sh
- finds files matching a regex in the current or given GitHub repo via the GitHub APIgithub_repo_latest_release.sh
- returns the latest release tag for a given GitHub repo via the GitHub APIgithub_repo_latest_release_filter.sh
- returns the latest release tag matching a given regex filter for a given GitHub repo via the GitHub API. Useful for getting the latest version of things like Kustomize which has other releases for kyamlgithub_repo_stars.sh
- fetches the stars, forks and watcher counts for a given repogithub_repo_teams.sh
- fetches the GitHub Enterprise teams and their role permissions for a given repo. Combine with github_foreach_repo.sh to audit all your personal or GitHub organization's repos
github_repo_collaborators.sh
- fetches a repo's granted users and outside invited collaborators as well as their role permissions for a given repo. Combine with github_foreach_repo.sh to audit all your personal or GitHub organization's repos
github_repo_protect_branches.sh
- enables branch protections on the given repo. Can specify one or more branches to protect, otherwise finds and applies to any ofmaster
,main
,develop
,dev
,staging
,production
github_repos_find_files.sh
- finds files matching a regex across all repos in the current GitHub organization or user accountgithub_repo_fork_sync.sh
- sync's current or given fork, then runsgithub_repo_fork_update.sh
to cascade changes to major branches via Pull Requests for auditabilitygithub_repo_fork_update.sh
- updates a forked repo by creating pull requests for full audit tracking and auto-merges PRs for non-production branchesgithub_repos_public.sh
- lists public repos for a user or organization. Useful to periodically scan and account for any public reposgithub_repos_disable_wiki.sh
- disables the Wiki on one or more given repos to prevent documentation fragmentation and make people use the centralized documentation tool eg. Confluence or Slitegithub_repos_with_few_users.sh
- finds repos with few or no users (default: 1), which in Enterprises is a sign that a user has created a repo without assigning team privilegesgithub_repos_with_few_teams.sh
- finds repos with few or no teams (default: 0), which in Enterprises is a sign that a user has created a repo without assigning team privilegesgithub_repos_without_branch_protections.sh
- finds repos without any branch protection rules (usegithub_repo_protect_branches.sh
on such repos)github_repos_not_in_terraform.sh
- finds all non-fork repos for current or given user/organization which are not found in$PWD/*.tf
Terraform codegithub_teams_not_in_terraform.sh
- finds all teams for given organization which are not found in$PWD/*.tf
Terraform codegithub_repos_sync_status.sh
- determines whether each GitHub repo's mirrors on GitLab / BitBucket / Azure DevOps are up to date with the latest commits, by querying all 3 APIs and comparing master branch hashrefsgithub_teams_not_idp_synced.sh
- finds GitHub teams that aren't sync'd from an IdP like Azure AD. These should usually be migrated or removedgithub_user_repos_stars.sh
- fetches the total number of stars for all original source public repos for a given usergithub_user_repos_forks.sh
- fetches the total number of forks for all original source public repos for a given usergithub_user_repos_count.sh
- fetches the total number of original source public repos for a given usernamegithub_user_followers.sh
- fetches the number of followers for a given username
gitlab/*.sh
- GitLab API scripts:gitlab_api.sh
- queries the GitLab API. Can infer GitLab user, repo and authentication token from local checkout or environment ($GITLAB_USER
,$GITLAB_TOKEN
)gitlab_install_binary.sh
- installs a binary from GitLab releases into $HOME/bin or /usr/local/bin. Auto-determines the latest release if no version specified, detects and unpacks any tarball or zip filesgitlab_push_mr_preview.sh
- pushes to GitLab origin, sets upstream branch, then opens a Merge Request preview from the current to the default branch
gitlab_push_mr.sh
- pushes to GitLab origin, sets upstream branch, then idempotently creates a Merge Request from current branch to the given or default trunk branch and opens the generated MR in your browser for review
gitlab_foreach_repo.sh
- executes a templated command for each GitLab project/repo, replacing the{user}
and{project}
in each iterationgitlab_project_latest_release.sh
- returns the latest release tag for a given GitLab project (repo) via the GitLab APIgitlab_project_set_description.sh
- sets the description for one or more projects using the GitLab APIgitlab_project_set_env_vars.sh
- adds / updates GitLab project-level environment variable(s) via the API fromkey=value
or shell export format, as args or via stdin (eg. piped fromaws_csv_creds.sh
)gitlab_group_set_env_vars.sh
- adds / updates GitLab group-level environment variable(s) via the API fromkey=value
or shell export format, as args or via stdin (eg. piped fromaws_csv_creds.sh
)gitlab_project_create_import.sh
- creates a GitLab repo as an import from a given URL, and mirrors if on GitLab Premium (can only manually configure for public repos on free tier, API doesn't support configuring even public repos on free)gitlab_project_protect_branches.sh
- enables branch protections on the given project. Can specify one or more branches to protect, otherwise finds and applies to any ofmaster
,main
,develop
,dev
,staging
,production
gitlab_project_mirrors.sh
- lists each GitLab repo and whether it is a mirror or notgitlab_pull_mirror.sh
- trigger a GitLab pull mirroring for a given project's repo, or auto-infers project name from the local git repogitlab_ssh_get_user_public_keys.sh
- fetches a given GitLab user's public SSH keys via the API, with identifying comments, for piping to~/.ssh/authorized_keys
or adjacent toolsgitlab_ssh_get_public_keys.sh
- fetches the currently authenticated GitLab user's public SSH keys via the APIgitlab_ssh_add_public_keys.sh
- uploads SSH keys from local files or standard input to the currently authenticated GitLab account. Specify pubkey files (default:~/.ssh/id_rsa.pub
) or read from standard input for piping from adjacent toolsgitlab_ssh_delete_public_keys.sh
- deletes given SSH keys from the currently authenticated GitLab account by key id or title regex matchgitlab_validate_ci_yaml.sh
- validates a.gitlab-ci.yml
file via the GitLab API
bitbucket/*.sh
- BitBucket API scripts:bitbucket_api.sh
- queries the BitBucket API. Can infer BitBucket user, repo and authentication token from local checkout or environment ($BITBUCKET_USER
,$BITBUCKET_TOKEN
)bitbucket_foreach_repo.sh
- executes a templated command for each BitBucket repo, replacing the{user}
and{repo}
in each iterationbitbucket_workspace_set_env_vars.sh
- adds / updates Bitbucket workspace-level environment variable(s) via the API fromkey=value
or shell export format, as args or via stdin (eg. piped fromaws_csv_creds.sh
)bitbucket_repo_set_env_vars.sh
- adds / updates Bitbucket repo-level environment variable(s) via the API fromkey=value
or shell export format, as args or via stdin (eg. piped fromaws_csv_creds.sh
)bitbucket_repo_set_description.sh
- sets the description for one or more repos using the BitBucket APIbitbucket_enable_pipelines.sh
- enables the CI/CD pipelines for all reposbitbucket_disable_pipelines.sh
- disables the CI/CD pipelines for all reposbitbucket_repo_enable_pipeline.sh
- enables the CI/CD pipeline for a given repobitbucket_repo_disable_pipeline.sh
- disables the CI/CD pipeline for a given repobitbucket_ssh_get_public_keys.sh
- fetches the currently authenticated BitBucket user's public SSH keys via the API for piping to~/.ssh/authorized_keys
or adjacent toolsbitbucket_ssh_add_public_keys.sh
- uploads SSH keys from local files or standard input to the currently authenticated BitBucket account. Specify pubkey files (default:~/.ssh/id_rsa.pub
) or read from standard input for piping from adjacent toolsbitbucket_ssh_delete_public_keys.sh
- deletes given SSH keys from the currently authenticated BitBucket account by key id or title regex match
See also Knowledge Base notes for Git.
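To illustrate the placeholder templating used by the foreach scripts above, github_foreach_repo.sh is described as substituting {owner} / {name} or {repo} into a command for each non-fork repo. A typical invocation might look like this (a sketch - single quotes stop the shell expanding anything before the script performs the substitution, and exact argument handling is per the script's --help):

```shell
# clone every non-fork repo for the authenticated user into the current directory
github_foreach_repo.sh 'git clone git@github.com:{owner}/{name}.git'

# or just print each repo to see what would be iterated over
github_foreach_repo.sh 'echo {repo}'
```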
jenkins/
, terraform/
, teamcity/
, buildkite/
, circleci/
, travis/
, azure_devops/
, ..., cicd/
directories:
appveyor_api.sh
- queries AppVeyor's API with authenticationazure_devops/*.sh
- Azure DevOps scripts:azure_devops_api.sh
- queries Azure DevOps's API with authenticationazure_devops_foreach_repo.sh
- executes a templated command for each Azure DevOps repo, replacing{user}
,{org}
,{project}
and{repo}
in each iterationazure_devops_to_github_migration.sh
- migrates one or all Azure DevOps git repos to GitHub, including all branches and sets the default branch to match via the APIs to maintain the same checkout behaviourazure_devops_disable_repos.sh
- disables one or more given Azure DevOps repos (to prevent further pushes to them after migration to GitHub)
circleci/*.sh
- CircleCI scripts:circleci_api.sh
- queries CircleCI's API with authenticationcircleci_project_set_env_vars.sh
- adds / updates CircleCI project-level environment variable(s) via the API fromkey=value
or shell export format, as args or via stdin (eg. piped fromaws_csv_creds.sh
)circleci_context_set_env_vars.sh
- adds / updates CircleCI context-level environment variable(s) via the API fromkey=value
or shell export format, as args or via stdin (eg. piped fromaws_csv_creds.sh
)circleci_project_delete_env_vars.sh
- deletes CircleCI project-level environment variable(s) via the APIcircleci_context_delete_env_vars.sh
- deletes CircleCI context-level environment variable(s) via the APIcircleci_local_execute.sh
- installs CircleCI CLI and executes.circleci/config.yml
locallycircleci_public_ips.sh
- lists CircleCI public IP addresses via dnsjson.com
codeship_api.sh
- queries CodeShip's API with authenticationdrone_api.sh
- queries Drone.io's API with authenticationshippable_api.sh
- queries Shippable's API with authenticationwercker_app_api.sh
- queries Wercker's Applications API with authenticationgocd_api.sh
- queries GoCD's APIgocd.sh
- one-touch GoCD CI:- launches in Docker
- (re)creates config repo (
$PWD/setup/gocd_config_repo.json
) from which to source pipeline(s) (.gocd.yml
) - detects and enables agent(s) to start building
- call from any repo top level directory with a
.gocd.yml
config (all mine have it), mimicking structure of fully managed CI systems
concourse.sh
- one-touch Concourse CI:- launches in Docker
- configures pipeline from
$PWD/.concourse.yml
- triggers build
- tails results in terminal
- prints recent build statuses at end
- call from any repo top level directory with a
.concourse.yml
config (all mine have it), mimicking structure of fully managed CI systems
fly.sh
- shortens Concoursefly
command to not have to specify target all the timejenkins/*.sh
- Jenkins CI scripts:jenkins.sh
- one-touch Jenkins CI:- launches Docker container
- installs plugins
- validates
Jenkinsfile
- configures job from
$PWD/setup/jenkins-job.xml
- sets Pipeline to git remote origin's
Jenkinsfile
- triggers build
- tails results in terminal
- call from any repo top level directory with a
Jenkinsfile
pipeline andsetup/jenkins-job.xml
(all mine have it)
jenkins_api.sh
- queries the Jenkins REST API, handles authentication, pre-fetches the CSRF protection token crumb, and supports many environment variables such as $JENKINS_URL for ease of use
jenkins_jobs.sh
- lists Jenkins jobs (pipelines)jenkins_foreach_job.sh
- runs a templated command for each Jenkins jobjenkins_jobs_download_configs.sh
- downloads all Jenkins job configs to xml files of the same namejenkins_job_config.sh
- gets or sets a Jenkins job's configjenkins_job_description.sh
- gets or sets a Jenkins job's descriptionjenkins_job_enable.sh
- enables a Jenkins job by namejenkins_job_disable.sh
- disables a Jenkins job by namejenkins_job_trigger.sh
- triggers a Jenkins job by namejenkins_job_trigger_with_params.sh
- triggers a Jenkins job with parameters which can be passed as--data KEY=VALUE
jenkins_jobs_enable.sh
- enables all Jenkins jobs/pipelines with names matching a given regexjenkins_jobs_disable.sh
- disables all Jenkins jobs/pipelines with names matching a given regexjenkins_builds.sh
- lists Jenkins latest builds for every jobjenkins_cred_add_cert.sh
- creates a Jenkins certificate credential from a PKCS#12 keystorejenkins_cred_add_kubernetes_sa.sh
- creates a Jenkins Kubernetes service account credentialjenkins_cred_add_secret_file.sh
- creates a Jenkins secret file credential from a filejenkins_cred_add_secret_text.sh
- creates a Jenkins secret string credential from a string or a filejenkins_cred_add_ssh_key.sh
- creates a Jenkins SSH key credential from a string or an SSH private key filejenkins_cred_add_user_pass.sh
- creates a Jenkins username/password credentialjenkins_cred_delete.sh
- deletes a given Jenkins credential by idjenkins_cred_list.sh
- lists Jenkins credentials IDs and Namesjenkins_cred_update_cert.sh
- updates a Jenkins certificate credential from a PKCS#12 keystorejenkins_cred_update_kubernetes_sa.sh
- updates a Jenkins Kubernetes service account credentialjenkins_cred_update_secret_file.sh
- updates a Jenkins secret file credential from a filejenkins_cred_update_secret_text.sh
- updates a Jenkins secret string credential from a string or a filejenkins_cred_update_ssh_key.sh
- updates a Jenkins SSH key credential from a string or an SSH private key filejenkins_cred_update_user_pass.sh
- updates a Jenkins username/password credentialjenkins_cred_set_cert.sh
- creates or updates a Jenkins certificate credential from a PKCS#12 keystorejenkins_cred_set_kubernetes_sa.sh
- creates or updates a Jenkins Kubernetes service account credentialjenkins_cred_set_secret_file.sh
- creates or updates a Jenkins secret file credential from a filejenkins_cred_set_secret_text.sh
- creates or updates a Jenkins secret string credential from a string or a filejenkins_cred_set_ssh_key.sh
- creates or updates a Jenkins SSH key credential from a string or an SSH private key filejenkins_cred_set_user_pass.sh
- creates or updates a Jenkins username/password credential
jenkins_cli.sh
- shortensjenkins-cli.jar
command by auto-inferring basic configurations, auto-downloading the CLI if absent, inferring a bunch of Jenkins related variables like $JENKINS_URL
,$JENKINS_CLI_ARGS
and authentication using$JENKINS_USER
/$JENKINS_PASSWORD
, or finds admin password from inside local docker container. Used heavily byjenkins.sh
one-shot setup and the following scripts:jenkins_foreach_job_cli.sh
- runs a templated command for each Jenkins jobjenkins_create_job_parallel_test_runs.sh
- creates a freestyle parameterized test sleep job and launches N parallel runs of it to test scaling and parallelization of Jenkins on Kubernetes agentsjenkins_create_job_check_gcp_serviceaccount.sh
- creates a freestyle test job which runs a GCP Metadata query to determine the GCP serviceaccount the agent pod is operating under to check GKE Workload Identity integrationjenkins_jobs_download_configs_cli.sh
- downloads all Jenkins job configs to xml files of the same namejenkins_cred_cli_add_cert.sh
- creates a Jenkins certificate credential from a PKCS#12 keystorejenkins_cred_cli_add_kubernetes_sa.sh
- creates a Jenkins Kubernetes service account credentialjenkins_cred_cli_add_secret_file.sh
- creates a Jenkins secret file credential from a filejenkins_cred_cli_add_secret_text.sh
- creates a Jenkins secret string credential from a string or a filejenkins_cred_cli_add_ssh_key.sh
- creates a Jenkins SSH key credential from a string or an SSH private key filejenkins_cred_cli_add_user_pass.sh
- creates a Jenkins username/password credentialjenkins_cred_cli_delete.sh
- deletes a given Jenkins credential by idjenkins_cred_cli_list.sh
- lists Jenkins credentials IDs and Namesjenkins_cred_cli_update_cert.sh
- updates a Jenkins certificate credential from a PKCS#12 keystorejenkins_cred_cli_update_kubernetes_sa.sh
- updates a Jenkins Kubernetes service account credentialjenkins_cred_cli_update_secret_file.sh
- updates a Jenkins secret file credential from a filejenkins_cred_cli_update_secret_text.sh
- updates a Jenkins secret string credential from a string or a filejenkins_cred_cli_update_ssh_key.sh
- updates a Jenkins SSH key credential from a string or an SSH private key filejenkins_cred_cli_update_user_pass.sh
- updates a Jenkins username/password credentialjenkins_cred_cli_set_cert.sh
- creates or updates a Jenkins certificate credential from a PKCS#12 keystorejenkins_cred_cli_set_kubernetes_sa.sh
- creates or updates a Jenkins Kubernetes service account credentialjenkins_cred_cli_set_secret_file.sh
- creates or updates a Jenkins secret file credential from a filejenkins_cred_cli_set_secret_text.sh
- creates or updates a Jenkins secret string credential from a string or a filejenkins_cred_cli_set_ssh_key.sh
- creates or updates a Jenkins SSH key credential from a string or an SSH private key filejenkins_cred_cli_set_user_pass.sh
- creates or updates a Jenkins username/password credential
jenkins_password.sh
- gets Jenkins admin password from local docker container. Used byjenkins_cli.sh
jenkins_plugins_latest_versions.sh
- finds the latest versions of given Jenkins plugins. Useful to programmatically upgrade your Jenkins on Kubernetes plugins defined in values.yamlcheck_jenkinsfiles.sh
- validates all*Jenkinsfile*
files in the given directory trees using the online Jenkins validator- See also Knowledge Base notes for Jenkins.
teamcity/*.sh
- TeamCity CI scripts:teamcity.sh
- one-touch TeamCity CI cluster:- launches Docker containers with server and 1 agent
- click proceed and accept the EULA
- waits for server to initialize
- waits for agent to register
- authorizes agent
- creates a VCS Root if
$PWD
has a.teamcity.vcs.json
/.teamcity.vcs.ssh.json
/.teamcity.vcs.oauth.json
and corresponding$TEAMCITY_SSH_KEY
or$TEAMCITY_GITHUB_CLIENT_ID
+$TEAMCITY_GITHUB_CLIENT_SECRET
environment variables - creates a Project and imports all settings and builds from the VCS Root
- creates an admin user and an API token for you
- see also: TeamCity CI config repo for importing pipelines
teamcity_api.sh
- queries TeamCity's API, auto-handling authentication and other quirks of the APIteamcity_create_project.sh
- creates a TeamCity project using the APIteamcity_create_github_oauth_connection.sh
- creates a TeamCity GitHub OAuth VCS connection in the Root project, useful for bootstrapping projects from VCS configsteamcity_create_vcs_root.sh
- creates a TeamCity VCS root from a saved configuration (XML or JSON), as downloaded by teamcity_export_vcs_roots.sh
teamcity_upload_ssh_key.sh
- uploads an SSH private key to a TeamCity project (for use in VCS root connections)teamcity_agents.sh
- lists TeamCity agents, their connected state, authorized state, whether enabled and up to dateteamcity_builds.sh
- lists the last 100 TeamCity builds along with their state (eg. finished) and status (eg. SUCCESS / FAILURE)
teamcity_buildtypes.sh
- lists TeamCity buildTypes (pipelines) along with their project and IDs
teamcity_buildtype_create.sh
- creates a TeamCity buildType from a local JSON configuration (seeteamcity_buildtypes_download.sh
)teamcity_buildtype_set_description_from_github.sh
- syncs a TeamCity buildType's description from its GitHub repo description
teamcity_buildtypes_set_description_from_github.sh
- syncs all TeamCity buildType descriptions from their GitHub repos where available
teamcity_export.sh
- downloads TeamCity configs to local JSON files in per-project directories mimicking native TeamCity directory structure and file namingteamcity_export_project_config.sh
- downloads TeamCity project config to local JSON filesteamcity_export_buildtypes.sh
- downloads TeamCity buildType config to local JSON filesteamcity_export_vcs_roots.sh
- downloads TeamCity VCS root config to local JSON filesteamcity_projects.sh
- lists TeamCity project IDs and Namesteamcity_project_set_versioned_settings.sh
- configures a project to track all changes to a VCS (eg. GitHub)teamcity_project_vcs_versioning.sh
- quickly toggle VCS versioning on/off for a given TeamCity project (useful for testing without auto-committing)teamcity_vcs_roots.sh
- lists TeamCity VCS root IDs and Names
travis/*.sh
- Travis CI API scripts (one of my all-time favourite CI systems):travis_api.sh
- queries the Travis CI API with authentication using$TRAVIS_TOKEN
travis_repos.sh
- lists Travis CI repostravis_foreach_repo.sh
- executes a templated command against all Travis CI repostravis_repo_build.sh
- triggers a build for the given repotravis_repo_caches.sh
- lists caches for a given repotravis_repo_crons.sh
- lists crons for a given repotravis_repo_env_vars.sh
- lists environment variables for a given repotravis_repo_settings.sh
- lists settings for a given repotravis_repo_create_cron.sh
- creates a cron for a given repo and branchtravis_repo_delete_crons.sh
- deletes all crons for a given repotravis_repo_delete_caches.sh
- deletes all caches for a given repo (sometimes clears build problems)travis_delete_cron.sh
- deletes a Travis CI cron by IDtravis_repos_settings.sh
- lists settings for all repostravis_repos_caches.sh
- lists caches for all repostravis_repos_crons.sh
- lists crons for all repostravis_repos_create_cron.sh
- creates a cron for all repostravis_repos_delete_crons.sh
- deletes all crons for all repostravis_repos_delete_caches.sh
- deletes all caches for all repostravis_lint.sh
- lints a given.travis.yml
using the API
buildkite/*.sh
- BuildKite API scripts:buildkite_api.sh
- queries the BuildKite API, handling authentication using$BUILDKITE_TOKEN
buildkite_pipelines.sh
- list buildkite pipelines for your$BUILDKITE_ORGANIZATION
/$BUILDKITE_USER
buildkite_foreach_pipeline.sh
- executes a templated command for each Buildkite pipeline, replacing the{user}
and{pipeline}
in each iterationbuildkite_agent.sh
- runs a buildkite agent locally on Linux or Mac, or in Docker with choice of Linux distrosbuildkite_agents.sh
- lists the connected Buildkite agents along with their hostname, IP, started date and agent details
buildkite_pipelines.sh
- lists Buildkite pipelinesbuildkite_create_pipeline.sh
- create a Buildkite pipeline from a JSON configuration (like frombuildkite_get_pipeline.sh
orbuildkite_save_pipelines.sh
)buildkite_get_pipeline.sh
- gets details for a specific Buildkite pipeline in JSON formatbuildkite_update_pipeline.sh
- updates a BuildKite pipeline from a configuration provided via stdin or from a file saved viabuildkite_get_pipeline.sh
buildkite_patch_pipeline.sh
- updates a BuildKite pipeline from a partial configuration provided as an arg, via stdin, or from a file saved viabuildkite_get_pipeline.sh
buildkite_pipeline_skip_settings.sh
- lists the skip intermediate build settings for one or more given BuildKite pipelinesbuildkite_pipeline_set_skip_settings.sh
- configures given or all BuildKite pipelines to skip intermediate builds and cancel running builds in favour of latest buildbuildkite_cancel_scheduled_builds.sh
- cancels BuildKite scheduled builds (to clear a backlog due to offline agents and just focus on new builds)buildkite_cancel_running_builds.sh
- cancels BuildKite running builds (to clear them and restart new later eg. after agent / environment change / fix)buildkite_pipeline_disable_forked_pull_requests.sh
- disables forked pull request builds on a BuildKite pipeline to protect your build environment from arbitrary code execution security vulnerabilitiesbuildkite_pipelines_vulnerable_forked_pull_requests.sh
- prints the status of this setting for each pipeline; they should all return false, otherwise run the above script to close the vulnerability
buildkite_rebuild_cancelled_builds.sh
- triggers rebuilds of last N cancelled builds in current pipelinebuildkite_rebuild_failed_builds.sh
- triggers rebuilds of last N failed builds in current pipeline (eg. after agent restart / environment change / fix)buildkite_rebuild_all_pipelines_last_cancelled.sh
- triggers rebuilds of the last cancelled build in each pipeline in the organizationbuildkite_rebuild_all_pipelines_last_failed.sh
- triggers rebuilds of the last failed build in each pipeline in the organizationbuildkite_retry_jobs_dead_agents.sh
- triggers job retries where jobs failed due to killed agents, continuing builds from that point and replacing their false negative failed status with the real final status, slightly better than rebuilding entire jobs which happen under a new buildbuildkite_recreate_pipeline.sh
- recreates a pipeline to wipe out all stats (see url and badge caveats in--help
)buildkite_running_builds.sh
- lists running builds and the agent they're running onbuildkite_save_pipelines.sh
- saves all BuildKite pipelines in your$BUILDKITE_ORGANIZATION
to local JSON files in$PWD/.buildkite-pipelines/
buildkite_set_pipeline_description.sh
- sets the description of one or more pipelines using the BuildKite APIbuildkite_set_pipeline_description_from_github.sh
- sets a Buildkite pipeline description to match its source GitHub repobuildkite_sync_pipeline_descriptions_from_github.sh
- for all BuildKite pipelines sets each description to match its source GitHub repobuildkite_trigger.sh
- triggers BuildKite build job for a given pipelinebuildkite_trigger_all.sh
- same as above but for all pipelines
terraform_cloud_*.sh
- Terraform Cloud API scripts:terraform_cloud_api.sh
- queries the Terraform Cloud API, handling authentication using $TERRAFORM_TOKEN
terraform_cloud_ip_ranges.sh
- returns the list of IP ranges for Terraform Cloudterraform_cloud_organizations.sh
- lists Terraform Cloud organizationsterraform_cloud_workspaces.sh
- lists Terraform Cloud workspacesterraform_cloud_workspace_vars.sh
- lists Terraform Cloud workspace variablesterraform_cloud_workspace_set_vars.sh
- adds / updates Terraform workspace-level sensitive environment/terraform variable(s) via the API fromkey=value
or shell export format, as args or via stdin (eg. piped fromaws_csv_creds.sh
)terraform_cloud_workspace_delete_vars.sh
- deletes one or more Terraform workspace-level variablesterraform_cloud_varsets.sh
- lists Terraform Cloud variable setsterraform_cloud_varset_vars.sh
- lists Terraform Cloud variables in one or all variable sets for the given organization
terraform_cloud_varset_set_vars.sh
- adds / updates Terraform sensitive environment/terraform variable(s) in a given variable set via the API fromkey=value
or shell export format, as args or via stdin (eg. piped fromaws_csv_creds.sh
)terraform_cloud_varset_delete_vars.sh
- deletes one or more Terraform variables in a given variable set
terraform_*.sh
- Terraform scripts:terraform_gcs_backend_version.sh
- determines the Terraform state version from the tfstate file in a GCS bucket found in a local givenbackend.tf
terraform_gitlab_download_backend_variable.sh
- downloads backend.tf from a GitLab CI/CD variable to be able to quickly iterate plans locallyterraform_import.sh
- finds given resource type in./*.tf
code or Terraform plan output that are not in Terraform state and imports themterraform_import_aws_iam_users.sh
- parses Terraform plan output to import newaws_iam_user
additions into Terraform stateterraform_import_aws_iam_groups.sh
- parses Terraform plan output to import newaws_iam_group
additions into Terraform stateterraform_import_aws_iam_policies.sh
- parses Terraform plan output to import newaws_iam_policies
additions, resolves their ARNs and imports them into Terraform stateterraform_import_aws_sso_permission_sets.sh
- finds allaws_ssoadmin_permission_set
in./*.tf
code, resolves the ARNs and imports them to Terraform stateterraform_import_aws_sso_account_assignments.sh
- parses Terraform plan output to import newaws_ssoadmin_account_assignment
additions into Terraform stateterraform_import_aws_sso_managed_policy_attachments.sh
- parses Terraform plan output to import new aws_ssoadmin_managed_policy_attachment
additions into Terraform stateterraform_import_aws_sso_permission_set_inline_policies.sh
- parses Terraform plan output to import newaws_ssoadmin_permission_set_inline_policy
additions into Terraform stateterraform_import_github_repos.sh
- finds allgithub_repository
in./*.tf
code or Terraform plan output that are not in Terraform state and imports them. See alsogithub_repos_not_in_terraform.sh
terraform_import_github_team.sh
- imports a given GitHub team into a given Terraform state resource, by first querying the GitHub API for the team ID needed to import into Terraformterraform_import_github_teams.sh
- finds allgithub_team
in./*.tf
code or Terraform plan output that are not in Terraform state, then queries the GitHub API for their IDs and imports them. See alsogithub_teams_not_in_terraform.sh
terraform_import_github_team_repos.sh
- finds allgithub_team_repository
in Terraform plan that would be added, then queries the GitHub API for the repos and team IDs and if they both exist then imports them to Terraform stateterraform_resources.sh
- external program to get all resource IDs and attributes for a given resource type to work around the Terraform splat expression limitation (#19931)
terraform_managed_resource_types.sh
- quick parse of what Terraform resource types are found in*.tf
files under the current or given directory tree. Useful to give you a quick glance of what services you are managing- See also Knowledge Base notes for Terraform.
checkov_resource_*.sh
- Checkov resource counts - useful to estimate Bridgecrew Cloud costs which are charged per resource:checkov_resource_count.sh
- counts the number of resources Checkov is scanning in the current or given directorycheckov_resource_count_all.sh
- counts the total number of resources Checkov is scanning across all given repo checkouts
octopus_api.sh
- queries the Octopus Deploy API
See also Knowledge Base notes for CI/CD.
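As a concrete example of the API wrappers in this section, jenkins_api.sh is described above as handling authentication and the CSRF crumb and supporting variables like $JENKINS_URL. Listing job names could then look like this (a sketch - /api/json is a standard Jenkins REST endpoint, but passing the endpoint path as the first argument is an assumption about the wrapper's interface, so check its --help):

```shell
export JENKINS_URL=https://jenkins.example.com   # placeholder URL

# list all job (pipeline) names
jenkins_api.sh /api/json | jq -r '.jobs[].name'
```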
ai/
and ipaas/
directories:
openai_api.sh
- queries the OpenAI (ChatGPT) API with authenticationmake_api.sh
- queries the Make.com API with authentication
bin/
, pingdom/
, terraform/
directories:
digital_ocean_api.sh
/doapi.sh
- queries the Digital Ocean API with authentication- see also the Digital Ocean CLI
doctl
(install/install_doctl.sh
)
- see also the Digital Ocean CLI
atlassian_ip_ranges.sh
- lists Atlassian's IPv4 and/or IPv6 cidr ranges via its APIcircleci_public_ips.sh
- lists CircleCI public IP addresses via dnsjson.comcloudflare_*.sh
- Cloudflare API queries and reports:cloudflare_api.sh
- queries the Cloudflare API with authenticationcloudflare_ip_ranges.sh
- lists Cloudflare's IPv4 and/or IPv6 cidr ranges via its APIcloudflare_custom_certificates.sh
- lists any custom SSL certificates in a given Cloudflare zone along with their status and expiry datecloudflare_dns_records.sh
- lists any Cloudflare DNS records for a zone, including the type and ttlcloudflare_dns_records_all_zones.sh
- same as above but for all zonescloudflare_dns_record_create.sh
- creates a DNS record in the given domaincloudflare_dns_record_update.sh
- updates a DNS record in the given domaincloudflare_dns_record_delete.sh
- deletes a DNS record in the given domaincloudflare_dns_record_details.sh
- lists the details for a DNS record in the given domain in JSON format for further pipe processingcloudflare_dnssec.sh
- lists the Cloudflare DNSSec status for all zonescloudflare_firewall_rules.sh
- lists Cloudflare Firewall rules, optionally with filter expressioncloudflare_firewall_access_rules.sh
- lists Cloudflare Firewall Access rules, optionally with filter expressioncloudflare_foreach_account.sh
- executes a templated command for each Cloudflare account, replacing the{account_id}
and{account_name}
in each iteration (useful for chaining withcloudflare_api.sh
)cloudflare_foreach_zone.sh
- executes a templated command for each Cloudflare zone, replacing the{zone_id}
and{zone_name}
in each iteration (useful for chaining withcloudflare_api.sh
, used by adjacentcloudflare_*_all_zones.sh
scripts)cloudflare_purge_cache.sh
- purges the entire Cloudflare cachecloudflare_ssl_verified.sh
- gets the Cloudflare zone SSL verification status for a given zonecloudflare_ssl_verified_all_zones.sh
- same as above for all zonescloudflare_zones.sh
- lists Cloudflare zone names and IDs (needed for writing Terraform Cloudflare code)
- `datadog_api.sh` - queries the DataDog API with authentication
- `gitguardian_api.sh` - queries the GitGuardian API with authentication
- `jira_api.sh` - queries the Jira API with authentication
- `kong_api.sh` - queries the Kong API Gateway's Admin API, handling authentication if enabled
- `traefik_api.sh` - queries the Traefik API, handling authentication if enabled
- `ngrok_api.sh` - queries the NGrok API with authentication
- `pingdom_*.sh` - Pingdom API queries and reports for status, latency, average response times, latency averages by hour, SMS credits, outage periods and durations over the last year etc.:
  - `pingdom_api.sh` - queries the SolarWinds Pingdom API with authentication
  - `pingdom_foreach_check.sh` - executes a templated command against each Pingdom check, replacing `{check_id}` and `{check_name}` in each iteration
  - `pingdom_checks.sh` - shows all Pingdom checks, their status and latencies
  - `pingdom_check_outages.sh` / `pingdom_checks_outages.sh` - show one or all Pingdom checks' outage histories for the last year
  - `pingdom_checks_average_response_times.sh` - shows the average response times for all Pingdom checks for the last week
  - `pingdom_check_latency_by_hour.sh` / `pingdom_checks_latency_by_hour.sh` - shows the average latency for one or all Pingdom checks broken down by hour of the day, over the last week
  - `pingdom_sms_credits.sh` - gets the remaining number of Pingdom SMS credits
- `terraform_cloud_api.sh` - queries the Terraform Cloud API with authentication
- `terraform_cloud_ip_ranges.sh` - returns the list of IP ranges for Terraform Cloud via the API, or optionally one or more of the ranges used by different functions
- `wordpress.sh` - boots WordPress in Docker with a MySQL backend, and increases `upload_max_filesize` to be able to restore a real-world-sized export backup
- `wordpress_api.sh` - queries the WordPress API with authentication
- `wordpress_posts_without_category_tags.sh` - checks posts (articles) for categories without corresponding tags and prints the posts and their missing tags
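These `*_api.sh` scripts share a common pattern: supply just the endpoint path and the script handles the base URL and authentication. A minimal sketch, assuming the token environment variables follow each script's documented header (the endpoints below are standard Cloudflare API routes, shown purely for illustration); the `{zone_id}` placeholder substitution comes from the `cloudflare_foreach_zone.sh` description above:

```shell
# list Cloudflare zone IDs and names directly via the authenticated API wrapper
cloudflare_api.sh /zones | jq -r '.result[] | [.id, .name] | @tsv'

# chain the foreach wrapper with the API wrapper: for every zone, list its DNS record names
# ({zone_id} is substituted on each iteration)
cloudflare_foreach_zone.sh "cloudflare_api.sh /zones/{zone_id}/dns_records | jq -r '.result[].name'"
```

The same foreach-plus-API chaining pattern can be used with `pingdom_foreach_check.sh` and `pingdom_api.sh`.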
`java/` directory:
- `java_show_classpath.sh` - shows the Java classpaths, one per line, of currently running Java programs
- `jvm_heaps*.sh` - show the Java heap sizes of all running Java processes, and their total MB (for performance tuning and sizing)
- Java Decompilers:
  - `java_decompile_jar.sh` - decompiles a Java JAR in `/tmp`, finds the main class and runs a Java decompiler on its main `.class` file using `jd_gui.sh` (see the sketch after this list)
  - `jd_gui.sh` - runs the Java Decompiler JD GUI, downloading its jar the first time if it's not already present
  - `bytecode_viwer.sh` - runs the Bytecode-Viewer GUI Java decompiler, downloading its jar the first time if it's not already present
  - `cfr.sh` - runs the CFR command line Java decompiler, downloading its jar the first time if it's not already present
  - `procyon.sh` - runs the Procyon command line Java decompiler, downloading its jar the first time if it's not already present
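A quick usage sketch, assuming `java_decompile_jar.sh` takes the path to a jar as its argument (the jar path below is hypothetical):

```shell
# decompiles the jar under /tmp, finds the main class and opens it in the JD GUI decompiler
# (the decompiler jar itself is downloaded automatically on first use)
java_decompile_jar.sh ./build/libs/myapp.jar   # hypothetical jar path
```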
See also Knowledge Base notes for Java and JVM Performance Tuning.
`python/` directory:
- `python_compile.sh` - byte-compiles Python scripts and libraries into `.pyo` optimized files
- `python_pip_install.sh` - bulk installs PyPI modules from a mix of arguments / file lists / stdin, accounting for user vs system installs, root vs user sudo, VirtualEnvs / Anaconda / GitHub Workflows / Google Cloud Shell, Mac vs Linux library paths, and an ignore-failure option
- `python_pip_install_if_absent.sh` - installs PyPI modules not already in the Python library path (OS or pip installed), for faster installations where OS packages already provide some of the modules, reducing time and failure rates in CI builds (see the usage sketch after this list)
- `python_pip_install_for_script.sh` - installs PyPI modules for the given script(s) if not already installed. Used for dynamic per-script dependency installation in the DevOps Python tools repo
- `python_pip_reinstall_all_modules.sh` - reinstalls all PyPI modules, which can fix some issues
- `pythonpath.sh` - prints all Python library search paths, one per line
- `python_find_library_path.sh` - finds the directory where a PyPI module is installed - without args finds the Python library base
- `python_find_library_executable.sh` - finds the directory where a PyPI module's CLI program is installed (system vs user, useful when it gets installed to a place that isn't in your `$PATH`, where `which` won't help)
- `python_find_unused_pip_modules.sh` - finds PyPI modules that aren't used by any programs in the current directory tree
- `python_find_duplicate_pip_requirements.sh` - finds duplicate PyPI modules listed for install under the directory tree (useful for deduping module installs in a project and across submodules)
- `python_translate_import_module.sh` - converts Python import modules to PyPI module names, used by `python_pip_install_for_script.sh`
- `python_translate_module_to_import.sh` - converts PyPI module names to Python import names, used by `python_pip_install_if_absent.sh` and `python_find_unused_pip_modules.sh`
- `python_pyinstaller.sh` - creates PyInstaller self-contained Python programs with the Python interpreter and all PyPI modules included
- `python_pypi_versions.sh` - prints all available versions of a given PyPI module using the API
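A minimal usage sketch for the pip helpers above, assuming module names are passed as arguments or on stdin as described (the module names are just examples):

```shell
# only pip-install what isn't already importable from OS packages or previous pip installs
echo "requests ruamel.yaml" | python_pip_install_if_absent.sh

# locate where a module actually got installed (useful for $PATH / site-packages debugging)
python_find_library_path.sh requests
```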
See also Knowledge Base notes for Python.
`perl/` directory:
- `perl_cpanm_install.sh` - bulk installs CPAN modules from a mix of arguments / file lists / stdin, accounting for user vs system installs, root vs user sudo, Perlbrew / Google Cloud Shell environments, Mac vs Linux library paths and an ignore-failure option, and auto-finds and reads the build failure log for quicker debugging, showing the root cause error in CI build logs
- `perl_cpanm_install_if_absent.sh` - installs CPAN modules not already in the Perl library path (OS or CPAN installed), for faster installations where OS packages already provide some of the modules, reducing time and failure rates in CI builds (see the usage sketch after this list)
- `perl_cpanm_reinstall_all.sh` - re-installs all CPAN modules. Useful for recompiling XS modules on Macs after Migration Assistant from an Intel Mac to an ARM Silicon Mac leaves your home XS libraries broken because they're built for the wrong architecture
- `perlpath.sh` - prints all Perl library search paths, one per line
- `perl_find_library_path.sh` - finds the directory where a CPAN module is installed - without args finds the Perl library base
- `perl_find_library_executable.sh` - finds the directory where a CPAN module's CLI program is installed (system vs user, useful when it gets installed to a place that isn't in your `$PATH`, where `which` won't help)
- `perl_find_unused_cpan_modules.sh` - finds CPAN modules that aren't used by any programs in the current directory tree
- `perl_find_duplicate_cpan_requirements.sh` - finds duplicate CPAN modules listed for install more than once under the directory tree (useful for deduping module installs in a project and across submodules)
- `perl_generate_fatpacks.sh` - creates Fatpacks - self-contained Perl programs with all CPAN modules built in
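A similar sketch for the CPAN helpers, assuming module names are passed as arguments or on stdin as described (the module names are just examples):

```shell
# only build CPAN modules that aren't already provided by the OS or a previous install
perl_cpanm_install_if_absent.sh JSON::XS LWP::UserAgent
```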
See also Knowledge Base notes for Perl.
`packages/` directory:
- `golang_install.sh` - bulk installs Golang modules from a mix of arguments / file lists / stdin
- `golang_install_if_absent.sh` - same as above but only if the package binary isn't already available in `$PATH` (see the sketch after this list)
- `golang_rm_binaries.sh` - deletes binaries of the same name adjacent to `.go` files. Doesn't delete your `bin/` etc. as these are often real deployed applications rather than development binaries
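A short sketch, assuming `golang_install_if_absent.sh` takes Go package paths as arguments and skips any whose binaries are already in `$PATH` (the package path is just an example):

```shell
# skipped entirely if the resulting binary is already in $PATH
golang_install_if_absent.sh golang.org/x/tools/cmd/goimports
```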
`media/` directory:
- `mp3_set_artist.sh` / `mp3_set_album.sh` - set the artist / album tag for all MP3 files under the given directories. Useful for grouping artists/albums and audiobook authors/books (eg. for correct importing into Mac's Books.app)
- `mp3_set_track_name.sh` - sets the track name metadata for MP3 files under the given directories to follow their filenames. Useful for correctly displaying audiobook progress / chapters etc.
- `mp3_set_track_order.sh` - sets the track order metadata for MP3 files under the given directories to follow the lexical file naming order. Useful for correctly ordering album songs and audiobook chapters (eg. for Mac's Books.app). Especially useful for enforcing a global ordering on multi-CD audiobooks after grouping them into a single audiobook using `mp3_set_album.sh`, otherwise the default track numbers in each CD interleave in Mac's Books.app (see the example after this list)
- `avi_to_mp4.sh` - converts AVI files to MP4 using ffmpeg. Useful to be able to play videos on devices like smart TVs that may not otherwise recognize newer codecs
- `mkv_to_mp4.sh` - converts MKV files to MP4 using ffmpeg. Same use case as above
- `youtube_download_channel.sh` - downloads all videos from a given YouTube channel URL
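A sketch of the audiobook workflow described above - the argument order is illustrative and hypothetical, so check each script's usage header before running:

```shell
# group a multi-CD audiobook into one album, then enforce a single global track order
mp3_set_album.sh "My Audiobook" ./my-audiobook/   # hypothetical: album name + directory
mp3_set_track_order.sh ./my-audiobook/            # orders tracks by lexical filename order
```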
See also Knowledge Base notes for MultiMedia.
40+ Spotify API scripts (used extensively to manage my Spotify-Playlists repo).

`spotify/` directory:
- `spotify_playlists*.sh` - list playlists in either `<id> <name>` or JSON format
- `spotify_playlist_tracks*.sh` - gets playlist contents as track URIs / `Artist - Track` / CSV format - useful for backups or exports between music systems
- `spotify_backup.sh` - backs up all Spotify playlists as well as the ordered list of playlists
- `spotify_backup_playlist*.sh` - backs up Spotify playlists to local files in both human-readable `Artist - Track` format and Spotify URI format for easy restores or adding to new playlists
- `spotify_search*.sh` - search Spotify's library for tracks / albums / artists, getting results in human-readable, JSON or URI formats for easy loading to Spotify playlists
- `spotify_release_year.sh` - searches for a given track or album and finds the original release year
- `spotify_uri_to_name.sh` - converts Spotify track / album / artist URIs to human-readable `Artist - Track` / CSV format. Takes Spotify URIs, URL links or just IDs. Reads URIs from files or standard input
- `spotify_create_playlist.sh` - creates a Spotify playlist, either public or private
- `spotify_rename_playlist.sh` - renames a Spotify playlist
- `spotify_set_playlists_public.sh` / `spotify_set_playlists_private.sh` - sets one or more given Spotify playlists to public / private
- `spotify_add_to_playlist.sh` - adds tracks to a given playlist. Takes a playlist name or ID and Spotify URIs in any form from files or standard input. Can be combined with many other tools listed here which output Spotify URIs, or appended from other playlists. Can also be used to restore a Spotify playlist from backups
- `spotify_delete_from_playlist.sh` - deletes tracks from a given playlist. Takes a playlist name or ID and Spotify URIs in any form from files or standard input, optionally prefixed with a track position to remove only specific occurrences (useful for removing duplicates from playlists)
- `spotify_delete_from_playlist_if_in_other_playlists.sh` - deletes tracks from a given playlist if their URIs are found in the subsequently given playlists
- `spotify_delete_from_playlist_if_track_in_other_playlists.sh` - deletes tracks from a given playlist if their `Artist - Track` name matches are found in the subsequently given playlists (less accurate than the exact URI deletion above)
- `spotify_duplicate_uri_in_playlist.sh` - finds duplicate Spotify URIs in a given playlist (these are guaranteed exact duplicate matches), returns all but the first occurrence and optionally their track positions (zero-indexed to align with the Spotify API for easy chaining with other tools)
- `spotify_duplicate_tracks_in_playlist.sh` - finds duplicate Spotify tracks in a given playlist (these are identical `Artist - Track` name matches, which may be from different albums / singles)
- `spotify_delete_duplicates_in_playlist.sh` - deletes duplicate Spotify URI tracks (identical) in a given playlist using `spotify_duplicate_uri_in_playlist.sh` and `spotify_delete_from_playlist.sh`
- `spotify_delete_duplicate_tracks_in_playlist.sh` - deletes duplicate Spotify tracks (name matched) in a given playlist using `spotify_duplicate_tracks_in_playlist.sh` and `spotify_delete_from_playlist.sh`
- `spotify_delete_any_duplicates_in_playlist.sh` - calls both of the above scripts to first get rid of duplicate URIs and then remove any other duplicates by track name matches
- `spotify_playlist_tracks_uri_in_year.sh` - finds track URIs in a playlist whose original release date is in a given year or decade (by regex match)
- `spotify_playlist_uri_offset.sh` - finds the offset of a given track URI in a given playlist, useful to find the position at which to resume processing a large playlist
- `spotify_top_artists*.sh` - lists your top artists in URI or human-readable formats
- `spotify_top_tracks*.sh` - lists top tracks in URI or human-readable formats
- `spotify_liked_tracks*.sh` - lists your `Liked Songs` in URI or human-readable formats
- `spotify_liked_artists*.sh` - lists artists from `Liked Songs` in URI or human-readable formats
- `spotify_artists_followed*.sh` - lists all followed artists in URI or human-readable formats
- `spotify_artist_tracks.sh` - gets all track URIs for a given artist, from both albums and singles, for chain loading to playlists
- `spotify_follow_artists.sh` - follows artists for the given URIs from files or standard input
- `spotify_follow_top_artists.sh` - follows all artists in your current Spotify top artists list
- `spotify_follow_liked_artists.sh` - follows artists with N or more tracks in your `Liked Songs`
- `spotify_set_tracks_uri_to_liked.sh` - sets a list of Spotify track URIs to 'Liked' so they appear in the `Liked Songs` playlist. Useful for marking all the tracks in your best playlists as favourite tracks, or for porting historical `Starred` tracks to the newer `Liked Songs`
- `spotify_foreach_playlist.sh` - executes a templated command against all playlists, replacing `{playlist}` and `{playlist_id}` in each iteration (see the example after this list)
- `spotify_playlist_name_to_id.sh` / `spotify_playlist_id_to_name.sh` - convert playlist names <=> IDs
- `spotify_api_token.sh` - gets a Spotify authentication token using either the Client Credentials or Authorization Code authentication flow, the latter being able to read/modify private user data; automatically used by `spotify_api.sh`
- `spotify_api.sh` - query any Spotify API endpoint with authentication, used by the adjacent spotify scripts
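A small sketch of how these compose, assuming `spotify_foreach_playlist.sh` substitutes the placeholders as described above and `spotify_api.sh` takes the endpoint path as its first argument (`/v1/me/playlists` is a standard Spotify Web API route, used here for illustration):

```shell
# print "<playlist_id> <playlist name>" for every playlist
spotify_foreach_playlist.sh 'echo "{playlist_id} {playlist}"'

# query an arbitrary Spotify API endpoint - the token is obtained via spotify_api_token.sh
spotify_api.sh /v1/me/playlists | jq -r '.items[].name'
```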
`bin/`, `install/`, `packages/`, `setup/` directories:
- Linux / Mac systems administration scripts:
  - `install/` - installation scripts for various OS packages (RPM, Deb, Apk) for various Linux distros (Redhat RHEL / CentOS / Fedora, Debian / Ubuntu, Alpine)
    - install-if-absent scripts for Python, Perl, Ruby, NodeJS and Golang packages - good for minimizing the number of source code installs by first running the OS install scripts and then only building modules which aren't already detected as installed (provided by system packages), speeding up builds and reducing the likelihood of compile failures
    - install scripts for tarballs, Golang binaries, random 3rd party installers, Jython and build tools like Gradle and SBT for when Linux distros don't provide packaged versions or where the packaged versions are too old
  - `packages/` - OS / Distro Package Management:
    - `install_packages.sh` - installs package lists from arguments, files or stdin on major Linux distros and Mac, detecting the package manager and invoking the right install commands, with `sudo` if not root. Works on RHEL / CentOS / Fedora, Debian / Ubuntu, Alpine and Mac Homebrew. Leverages and supports all features of the distro / OS specific install scripts listed below
    - `install_packages_if_absent.sh` - installs package lists only if they're not already installed, saving time and minimizing install logs / CI logs; same support list as above (see the example after this list)
    - Redhat RHEL / CentOS:
      - `yum_install_packages.sh` / `yum_remove_packages.sh` - installs RPM lists from arguments, files or stdin. Handles Yum + Dnf behavioural differences, calls `sudo` if not root, auto-attempts variations of python/python2/python3 package names. Avoids yum slowness by checking whether an RPM is installed before attempting to install it, and accepts the `NO_FAIL=1` env var to ignore unavailable / changed package names (useful for optional packages or attempts at different package names across RHEL/CentOS/Fedora versions)
      - `yum_install_packages_if_absent.sh` - installs RPMs only if not already installed and not a metapackage provided by other packages (eg. the `vim` metapackage provided by `vim-enhanced`), saving time and minimizing install logs / CI logs, plus all the features of `yum_install_packages.sh` above
      - `rpms_filter_installed.sh` / `rpms_filter_not_installed.sh` - pipe filters for packages that are / are not installed, for easy script piping
    - Debian / Ubuntu:
      - `apt_install_packages.sh` / `apt_remove_packages.sh` - installs Deb package lists from arguments, files or stdin. Auto calls `sudo` if not root, accepts the `NO_FAIL=1` env var to ignore unavailable / changed package names (useful for optional packages or attempts at different package names across Debian/Ubuntu distros/versions)
      - `apt_install_packages_if_absent.sh` - installs Deb packages only if not already installed, saving time and minimizing install logs / CI logs, plus all the features of `apt_install_packages.sh` above
      - `apt_wait.sh` - blocking wait on concurrent apt locks to avoid failures and continue when available, mimicking yum's waiting behaviour rather than erroring out
      - `debs_filter_installed.sh` / `debs_filter_not_installed.sh` - pipe filters for packages that are / are not installed, for easy script piping
    - Alpine:
      - `apk_install_packages.sh` / `apk_remove_packages.sh` - installs Alpine apk package lists from arguments, files or stdin. Auto calls `sudo` if not root, accepts the `NO_FAIL=1` env var to ignore unavailable / changed package names (useful for optional packages or attempts at different package names across Alpine versions)
      - `apk_install_packages_if_absent.sh` - installs Alpine apk packages only if not already installed, saving time and minimizing install logs / CI logs, plus all the features of `apk_install_packages.sh` above
      - `apk_filter_installed.sh` / `apk_filter_not_installed.sh` - pipe filters for packages that are / are not installed, for easy script piping
    - Mac:
      - `brew_install_packages.sh` / `brew_remove_packages.sh` - installs Mac Homebrew package lists from arguments, files or stdin. Accepts the `NO_FAIL=1` env var to ignore unavailable / changed package names (useful for optional packages or attempts at different package names across versions)
      - `brew_install_packages_if_absent.sh` - installs Mac Homebrew packages only if not already installed, saving time and minimizing install logs / CI logs, plus all the features of `brew_install_packages.sh` above
      - `brew_filter_installed.sh` / `brew_filter_not_installed.sh` - pipe filters for packages that are / are not installed, for easy script piping
      - `brew_package_owns.sh` - finds which brew package owns a given filename argument
    - all builds across all my GitHub repos now run `make system-packages` before `make pip` / `make cpan` to shorten how many packages need installing, reducing the chances of build failures
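A minimal sketch of the cross-distro pattern, assuming package names are passed as arguments or on stdin as described above (the package names are just examples):

```shell
# detects the local package manager (apt / yum / dnf / apk / brew), uses sudo if not root,
# and skips anything already installed
install_packages_if_absent.sh git curl jq

# tolerate package names that don't exist under this distro's naming
NO_FAIL=1 install_packages.sh vim-enhanced vim
```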
`bin/`, `checks/`, `cicd/` or language-specific directories:
- `lint.sh` - lints one or more files, auto-determines the file types, parses lint headers and calls the appropriate scripts and tools. Integrated with my custom `.vimrc` (see the sketch after this list)
- `run.sh` - runs one or more files, auto-determines the file types and any run or arg headers, and executes each file using the appropriate script or CLI tool. Integrated with my custom `.vimrc`
- `check_*.sh` - extensive collection of generalized tests - these run against all my GitHub repos via CI. Some examples:
  - Programming language linting
  - Build System, Docker & CI linting
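A quick sketch, assuming both scripts take one or more file paths as arguments (the filenames are hypothetical):

```shell
# file types are auto-detected and the appropriate linters are called
lint.sh deploy.sh Dockerfile

# executes the file with the appropriate interpreter, honouring any run / arg headers
run.sh hello.py
```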
Optional, only if you don't do the full `make install`.

Install only OS system package dependencies and AWS CLI via Python Pip (doesn't symlink anything to `$HOME`):

`make`

Adds sourcing to `.bashrc` / `.bash_profile` and symlinks dot config files to `$HOME` (doesn't install OS system package dependencies):

`make link`

Undo via `make unlink`.

Install only OS system package dependencies (doesn't include AWS CLI or Python packages):

`make system-packages`

Install AWS CLI:

`make aws`

Install Azure CLI:

`make azure`

Install GCP GCloud SDK (includes CLI):

`make gcp`

Install GCP GCloud Shell environment (sets up persistent OS packages and all home directory configs):

`make gcp-shell`

Install generically useful Python CLI tools and modules (includes AWS CLI, autopep8 etc.):

`make python`
> make help
Usage:
Common Options:
make help show this message
make build installs all dependencies - OS packages and any language libraries via native tools eg. pip, cpanm, gem, go etc that are not available via OS packages
make build-retry retries 'make build' x 3 until success to try to mitigate temporary upstream repo failures triggering false alerts in CI systems
make ci prints env, then runs 'build-retry' for more resilient CI builds with debugging
make printenv prints environment variables, CPU cores, OS release, $PWD, Git branch, hashref etc. Useful for CI debugging
make system-packages installs OS packages only (detects OS via whichever package manager is available)
make test run tests
make clean removes compiled / generated files, downloaded tarballs, temporary files etc.
make submodules initialize and update submodules to the right release (done automatically by build / system-packages)
make init same as above, often useful to do in CI systems to get access to additional submodule provided targets such as 'make ci'
make cpan install any modules listed in any cpan-requirements.txt files if not already installed
make pip install any modules listed in any requirements.txt files if not already installed
make python-compile compile any python files found in the current directory and 1 level of subdirectory
make pycompile
make github open browser at github project
make readme open browser at github's README
make github-url print github url and copy to clipboard
make status open browser at Github CI Builds overview Status page for all projects
make ls print list of code files in project
make wc show counts of files and lines
Repo specific options:
make install builds all script dependencies, installs AWS CLI, symlinks all config files to $HOME and adds sourcing of bash profile
make link symlinks all config files to $HOME and adds sourcing of bash profile
make unlink removes all symlinks pointing to this repo's config files and removes the sourcing lines from .bashrc and .bash_profile
make python-desktop installs all Python Pip packages for desktop workstation listed in setup/pip-packages-desktop.txt
make perl-desktop installs all Perl CPAN packages for desktop workstation listed in setup/cpan-packages-desktop.txt
make ruby-desktop installs all Ruby Gem packages for desktop workstation listed in setup/gem-packages-desktop.txt
make golang-desktop installs all Golang packages for desktop workstation listed in setup/go-packages-desktop.txt
make nodejs-desktop installs all NodeJS packages for desktop workstation listed in setup/npm-packages-desktop.txt
make desktop installs all of the above + many desktop OS packages listed in setup/
make mac-desktop all of the above + installs a bunch of major common workstation software packages like Ansible, Terraform, MiniKube, MiniShift, SDKman, Travis CI, CCMenu, Parquet tools etc.
make linux-desktop
make ls-scripts print list of scripts in this project, ignoring code libraries in lib/ and .bash.d/
make github-cli installs GitHub CLI
make kubernetes installs Kubernetes kubectl and kustomize to ~/bin/
make terraform installs Terraform to ~/bin/
make vim installs Vundle and plugins
make tmux installs TMUX TPM and plugin for kubernetes context
make ccmenu installs and (re)configures CCMenu to watch this and all other major HariSekhon GitHub repos
make status open the Github Status page of all my repos build statuses across all CI platforms
make aws installs AWS CLI tools
make azure installs Azure CLI
make gcp installs Google Cloud SDK
make digital-ocean installs Digital Ocean CLI
make aws-shell sets up AWS Cloud Shell: installs core packages and links configs
(maintains itself across future Cloud Shells via .aws_customize_environment hook)
make gcp-shell sets up GCP Cloud Shell: installs core packages and links configs
(maintains itself across future Cloud Shells via .customize_environment hook)
make azure-shell sets up Azure Cloud Shell (limited compared to gcp-shell, doesn't install OS packages since there is no sudo)
Now exiting usage help with status code 3 to explicitly prevent silent build failures from stray 'help' arguments
make: *** [help] Error 3
(`make help` exits with error code 3, like most of my programs, to differentiate from build success and make sure a stray `help` argument doesn't cause a silent build failure with exit code 0)
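For example, you can confirm the deliberately non-zero exit status from the shell (based on the behaviour described above):

```shell
make help
echo $?   # non-zero - the help target exits 3 rather than 0, so it can't masquerade as a successful build
```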
The rest of my original source repos are here.
Pre-built Docker images are available on my DockerHub.