[e2e failure] Cluster level logging implemented by Stackdriver should ingest logs #52433
[MILESTONENOTIFIER] Milestone Labels Complete
The 100k-chars line is split into smaller chunks on export. There's something broken in the logging mechanism; I'm investigating. /cc @Random-Liu @dashpole
@dchen1107 Docker splits 100k-long log messages into 16k messages. Is this expected?
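For concreteness, here is a minimal Go sketch of the splitting arithmetic being described. The 16KiB chunk size is the limit cited later in this thread; the payload content is purely illustrative:

```go
package main

import "fmt"

// Illustrates the symptom under discussion: a single 100k-character log
// line delivered as 16KiB chunks. Only the chunk size comes from the
// thread; the payload is made up for the example.
func main() {
	const chunkSize = 16 * 1024
	line := make([]byte, 100000)
	for i := range line {
		line[i] = 'A'
	}

	var chunks [][]byte
	for len(line) > 0 {
		n := chunkSize
		if n > len(line) {
			n = len(line)
		}
		chunks = append(chunks, line[:n])
		line = line[n:]
	}
	fmt.Printf("split into %d chunks (last chunk is %d bytes)\n",
		len(chunks), len(chunks[len(chunks)-1]))
	// prints: split into 7 chunks (last chunk is 1696 bytes)
}
```

A test that expects the original 100k line to arrive as a single entry will therefore see seven entries instead.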
(Note this isn't blocking the submit queue, just the e2e tests, but we still need to get this green for the release.)
Reproducible on GCP on COS at head and in versions 1.6.9 and 1.7.5; I suspect it might be related to the COS image.
This seems to be the correct behavior now. This comment, moby/moby#34620 (comment), seems to indicate this was a recent change. I suspect that this changed when we upgraded from Docker 1.11 to 1.12.
That is a problem that is going to break a lot of people. Is there a way to turn this off? /cc @fgrzadkowski @piosz
@tagomoris @repeatedly @edsiper Could you please help? Is there already a plugin to recombine Docker output?
/cc @igorpeshansky.
@dashpole The 16k split behavior was introduced in Docker 1.13.0 via moby/moby#22982. @crassirostris The only log drivers that currently handle the partial flag are journald and jsonfile. The jsonfile driver only adds a \n to the end of a log message if the partial flag is not set on that message. https://github.com/moby/moby/blob/master/daemon/logger/jsonfilelog/jsonfilelog.go#L121
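Based on the jsonfile behavior just described (a trailing \n marks a complete message), a collector could recombine split chunks roughly as follows. This is a hedged sketch, not an existing plugin: the log file path is hypothetical, and the entry fields simply mirror the json-file line format.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// jsonLogEntry holds the fields of one json-file log line; only Log is
// needed for reassembly in this sketch.
type jsonLogEntry struct {
	Log    string `json:"log"`
	Stream string `json:"stream"`
	Time   string `json:"time"`
}

func main() {
	// Hypothetical path; real container IDs would appear here.
	f, err := os.Open("/var/lib/docker/containers/<id>/<id>-json.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var pending strings.Builder
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // json-file lines can exceed the default token size

	for sc.Scan() {
		var entry jsonLogEntry
		if err := json.Unmarshal(sc.Bytes(), &entry); err != nil {
			continue // skip malformed lines in this sketch
		}
		pending.WriteString(entry.Log)
		// Per the driver behavior above, '\n' is appended only when the
		// partial flag is NOT set, so a trailing '\n' marks a complete message.
		if strings.HasSuffix(entry.Log, "\n") {
			fmt.Print(pending.String())
			pending.Reset()
		}
	}
}
```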
Thanks for the correction @nickperry, we did just move to 1.13 (not 1.12).
OK, since this is not a problem with Kubernetes per se, I'll move the offending test out of the blocking suite while working on the resolution of this problem. @nickperry Thanks for the clarification.
See https://storage.googleapis.com/k8s-gubernator/triage/index.html?text=AAAAAAAAAAAAAA for the full list of jobs this hits.
Automatic merge from submit-queue.

[fluentd-gcp addon] Remove some e2e tests out of blocking suites

Fixes #52433

Some Stackdriver Logging e2e tests are broken in release-blocking suites:
- Due to the change in Docker 1.13, on some systems logs are automatically split into 16K chunks. This PR removes an e2e test that assumes otherwise.
- In large clusters, it's not possible to ingest system logs from all nodes.

Since it's not a Kubernetes problem per se, mitigating this by removing these tests from blocking suites.
@kubernetes/sig-instrumentation-test-failures
@kubernetes/kubernetes-release-managers
This test has started to fail in several e2e suites:
https://k8s-testgrid.appspot.com/release-master-blocking#gci-gce
https://k8s-testgrid.appspot.com/release-master-blocking#gke
Example failure: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/14136
#52289 seems suspect. cc @crassirostris