
fix panic when under pressure #3098

Merged: 25 commits merged into main from fix-pgueue-worker-leak on Aug 30, 2023

Conversation

@schoren (Contributor) commented Aug 23, 2023:

This PR fixes the performance issues detected by running the k6 tests.

Changes

Fixes

Checklist

  • tested locally
  • added new dependencies
  • updated the docs
  • added a test

Resolved review threads:

  • server/executor/queue_driver_postgres.go
  • testing/load/load-test.js (outdated)
  • testing/load/run.bash (outdated)
  • testing/load/run.bash
Diff under review:

-const enqueueTimeout = 500 * time.Millisecond
+const enqueueTimeout = 5 * time.Minute
@danielbdias (Contributor) commented Aug 29, 2023:

This line owes me two days of life and one year of therapy. 🙃

@mathnogueira (Contributor) left a comment:

Small thing, but it can panic the server.

Comment on lines +30 to +31
subscribers := append(m.getSubscribers(resourceID), subscriber)
m.setSubscribers(resourceID, subscribers)
@mathnogueira (Contributor) commented Aug 29, 2023:

Don't you need to lock the mutex here to prevent read-write race conditions? Go will panic on concurrent map read/write: https://stackoverflow.com/questions/36167200/how-safe-are-golang-maps-for-concurrent-read-write-operations

Same for PublishUpdate

Edit: reproduced the issue here (you might have to run it multiple times to see it fail): https://go.dev/play/p/h_AIcsi2J-Z

fatal error: concurrent map read and map write
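
For reference, a minimal standalone sketch of the same failure mode (not the linked playground; the map and key names are just illustrative): a plain Go map read and written from two goroutines with no synchronization makes the runtime abort the whole process.

```go
package main

// Two goroutines touch the same map with no mutex. The Go runtime detects the
// unsynchronized access and aborts with
// "fatal error: concurrent map read and map write", which cannot be recovered.
// Depending on scheduling it may take a few runs to trigger.
func main() {
	subscribers := map[string]int{"resource-1": 0}

	go func() {
		for {
			subscribers["resource-1"]++ // concurrent writer
		}
	}()

	for {
		_ = subscribers["resource-1"] // concurrent reader
	}
}
```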

@schoren (Contributor, Author) replied:

Originally, each function that needed to access the map managed the lock itself. That code was causing issues for some reason, so I moved the locking into the getSubscribers and setSubscribers functions; effectively, every read and write is now lock-protected.
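
A minimal sketch of that approach, assuming a manager guarded by a sync.Mutex; the type, field, and Subscriber names here are illustrative, not the actual Tracetest code:

```go
package subscription

import "sync"

// Subscriber is a placeholder for the real subscriber type.
type Subscriber interface{}

type manager struct {
	mutex       sync.Mutex
	subscribers map[string][]Subscriber
}

// getSubscribers takes the lock for every read of the map.
func (m *manager) getSubscribers(resourceID string) []Subscriber {
	m.mutex.Lock()
	defer m.mutex.Unlock()
	return m.subscribers[resourceID]
}

// setSubscribers takes the lock for every write to the map.
func (m *manager) setSubscribers(resourceID string, subscribers []Subscriber) {
	m.mutex.Lock()
	defer m.mutex.Unlock()
	m.subscribers[resourceID] = subscribers
}

// Subscribe never touches the map directly, so the runtime's
// concurrent-map-access check can no longer fire.
func (m *manager) Subscribe(resourceID string, subscriber Subscriber) {
	subscribers := append(m.getSubscribers(resourceID), subscriber)
	m.setSubscribers(resourceID, subscribers)
}
```

Note that this removes the fatal concurrent-map-access error, which is what the load test was tripping; the read-append-write in Subscribe is still not atomic as a whole, so two concurrent Subscribe calls for the same resource could in principle drop one subscriber unless a single lock is held across both steps.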

@jorgeepc (Contributor) left a comment:

LGTM!

@mathnogueira (Contributor) left a comment:

LGTM

@schoren merged commit 1093d04 into main on Aug 30, 2023. 30 checks passed.
@schoren deleted the fix-pgueue-worker-leak branch on August 30, 2023 at 00:38.
@schoren linked an issue on Aug 30, 2023 that may be closed by this pull request.
Successfully merging this pull request may close these issues:

  • Problem with Tracetest running k6 load testing

4 participants