Release 0.31.0-alpha
thomaspeitz committed Sep 23, 2021
1 parent 3a6e620 commit 41957eb
Showing 2 changed files with 24 additions and 5 deletions.
20 changes: 20 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,23 @@
# 0.31.0-alpha
* [BREAKING] Decoupled scraping is now the default. The code that allowed running the scraper without it has been removed.
```
# Those flags are just ignored
-decoupled-scraping=false
-decoupled-scraping=true
```
* [BREAKING] Short scraping intervals can be used again. Previously, yace derived the scraping
  interval from the config. This magic was removed for simplicity.
```
# Previously, this could in some cases still result in --scraping-interval 600
--scraping-interval 10
# Now it really scrapes every 10 seconds, which can drive up API costs, so please
# watch your API requests!
--scraping-interval 10
```
* Fix problems with the start/end time of scrapes (klarrio-dlamb)
* Add support for Database Migration Service metrics
* Allow hot-reloading the config via /reload (antoniomerlin)
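To put the interval warning above in numbers, here is a rough, hypothetical illustration of how the scrape interval drives daily scrape volume (one scrape cycle per interval; the actual number of CloudWatch API calls per cycle depends on your configured jobs and metrics):
```shell
# Scrape cycles per day at a given --scraping-interval (illustrative only;
# actual CloudWatch API calls per cycle depend on your configured jobs).
seconds_per_day=$((24 * 60 * 60))
echo "interval 600s: $((seconds_per_day / 600)) cycles/day"   # 144
echo "interval  10s: $((seconds_per_day / 10)) cycles/day"    # 8640
```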

# 0.30.1-alpha
* *SECURITY* Fix issue with building binaries. Please update to mitigate (https://nvd.nist.gov/vuln/detail/CVE-2020-14039)
* Thanks jeason81 for reporting this security incident!
9 changes: 4 additions & 5 deletions README.md
@@ -486,12 +486,11 @@ The flags 'cloudwatch-concurrency' and 'tag-concurrency' define the number of co
Setting a higher value speeds up scraping but can incur throttling and blocking of the API.

### Decoupled scraping
The flag 'decoupled-scraping' makes the exporter scrape CloudWatch metrics in the background at fixed intervals, instead of each time the '/metrics' endpoint is fetched. This protects against abusive API request volumes that can cause extra billing on the AWS account. This flag is activated by default.
The exporter scrapes CloudWatch metrics in the background at a fixed interval.
This protects against abusive API request volumes that can cause extra billing on the AWS account.

If the flag 'decoupled-scraping' is activated, the flag 'scraping-interval' defines the seconds between scrapes. Its default value is 300.

### Config reloading
Use a POST request to /reload to reload the config and reset the session cache.
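A reload can be triggered with plain curl against a running exporter; localhost:5000 below is an assumed address, so adjust the host and port to your deployment:
```shell
# Trigger a config reload and session-cache reset on a running exporter.
# localhost:5000 is an assumed listen address; point this at your deployment.
curl -X POST http://localhost:5000/reload
```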
The flag 'scraping-interval' defines the number of seconds between scrapes.
The default value is 300.
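For example, a 60-second interval can be set like this (a sketch; the binary name and any other flags in your deployment stay as they are):
```shell
# Scrape CloudWatch every 60 seconds instead of the 300-second default.
yace --scraping-interval 60
```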

## Troubleshooting / Debugging

