Bug 59882: JMeter memory allocations reduction #217
Closed
benbenw wants to merge 13 commits into apache:trunk from benbenw:memory-allocation1
Conversation
Use a specialized version of ByteArrayOutputStream which returns its internal buffer when possible; this removes an allocation of byte[responseSize] when the content length is known and respected. Do not create a buffer plus a copy when generating an MD5 signature. Lazily allocate the ByteArrayOutputStream buffer, which is not needed when the response is empty.
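The "return the internal buffer" idea can be sketched as follows. This is an illustrative subclass, not JMeter's actual class (the class and method names here are hypothetical): when the stream was pre-sized from a known Content-Length and is exactly full, the internal byte[] can be handed back without the defensive copy that toByteArray() performs.

```java
import java.io.ByteArrayOutputStream;

// Hypothetical sketch, not the actual JMeter implementation: a
// ByteArrayOutputStream that can hand back its internal buffer without a copy
// when the buffer is exactly full (e.g. it was sized from Content-Length).
class DirectAccessByteArrayOutputStream extends ByteArrayOutputStream {

    DirectAccessByteArrayOutputStream(int initialSize) {
        super(initialSize);
    }

    /**
     * Returns the internal buffer directly when its length matches the number
     * of bytes written; otherwise falls back to the usual copying toByteArray().
     */
    byte[] toByteArrayUnsafe() {
        if (count == buf.length) {
            return buf; // no copy: caller must treat the array as read-only
        }
        return toByteArray();
    }
}
```

The caller must treat the returned array as frozen, since it may alias the stream's internal state; that contract is what makes skipping the copy safe.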
Try to give a better default size to the StringBuilder. Use HttpClient's CharArrayBuffer to avoid memory allocations.
<stringProp name="HTTPSampler.port"></stringProp> makes getPortIfSpecified throw a "silent" exception. This is bad: it is slow and allocates memory for the stack trace. In my tests it always accounted for more than 5% of the memory allocated.
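A minimal sketch of the exception-free approach, assuming a helper shaped roughly like getPortIfSpecified (the names and the sentinel value are illustrative, not JMeter's actual API): check for the empty property before parsing, so the common empty-port case never constructs a NumberFormatException or fills in a stack trace.

```java
// Illustrative only: avoid exception-driven control flow when the port
// property is usually empty. The sentinel and names are hypothetical.
class PortParsing {
    static final int UNSPECIFIED_PORT = -1;

    static int getPortIfSpecified(String portValue) {
        if (portValue == null || portValue.isEmpty()) {
            // Fast path: no NumberFormatException, no stack trace allocation.
            return UNSPECIFIED_PORT;
        }
        try {
            return Integer.parseInt(portValue.trim());
        } catch (NumberFormatException e) {
            // Only genuinely malformed values pay the exception cost.
            return UNSPECIFIED_PORT;
        }
    }
}
```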
arguments. Construct the buffer with a better initial size to avoid re-allocations.
Provide a fast path which does not allocate memory.
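A sketch of what such a fast path can look like, modeled on the JOrphanUtils#replaceAllChars improvement mentioned in this PR (simplified, not the actual JMeter implementation): if the target character never occurs, the input string is returned as-is and no StringBuilder is ever allocated.

```java
// Simplified sketch of a non-allocating fast path for character replacement;
// not the actual JOrphanUtils code.
class ReplaceChars {

    static String replaceAllChars(String source, char ch, String newStr) {
        int first = source.indexOf(ch);
        if (first == -1) {
            return source; // fast path: nothing to replace, zero allocation
        }
        // Pre-size the builder; copy the unchanged prefix in one call.
        StringBuilder sb = new StringBuilder(source.length() + newStr.length());
        sb.append(source, 0, first);
        for (int i = first; i < source.length(); i++) {
            char c = source.charAt(i);
            if (c == ch) {
                sb.append(newStr);
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }
}
```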
asfgit pushed a commit that referenced this pull request on Jul 31, 2016:
Contributed by Benoit Wiart (b.wiart at ubik-ingenierie.com) Part of PR #217 on github. git-svn-id: https://svn.apache.org/repos/asf/jmeter/trunk@1754660 13f79535-47bb-0310-9956-ffa450edef68
I've been profiling JMeter memory allocation under different conditions; the PR linked to this bug gives substantial improvements.
The gain really depends on the test, but in the worst case (compressed content, no embedded resources) it is about 15% fewer bytes allocated.
With uncompressed content (no embedded resources) and a server that sends Content-Length, my test shows a 30% reduction in bytes allocated.
As always, YMMV, but this PR was a win in every test I ran.
The reduction in memory allocated comes from:
- Better usage of ByteArrayOutputStream (returning the internal buffer when possible, lazily allocating it for empty responses)
- Avoiding unneeded string concatenation caused by missing log.isDebugEnabled() guards
- Using HttpClient's CharArrayBuffer where possible to avoid memory allocations
- Not creating unneeded silent exceptions
- Not allocating 2 empty iterators on each call to executeSamplePackage
- Setting better default sizes for StringBuilder buffers
- Improving JOrphanUtils#replaceAllChars: a fast path which does not allocate memory
- etc.
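For the empty-iterator point, the usual zero-allocation approach is to reuse the JDK's shared immutable empty iterator rather than constructing fresh ones per call. A hedged sketch (the helper name and surrounding shape are hypothetical, not JMeter's actual executeSamplePackage code):

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

// Hypothetical helper, not JMeter's actual code: for the common case of an
// empty processor list, hand back the JDK's shared immutable empty iterator
// instead of allocating a new iterator object on every call.
class Processors {

    static Iterator<Runnable> processorIterator(List<Runnable> processors) {
        if (processors == null || processors.isEmpty()) {
            return Collections.emptyIterator(); // shared singleton: zero allocation
        }
        return processors.iterator();
    }
}
```

Collections.emptyIterator() returns an immutable singleton, which is exactly why it is safe to share across calls.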
Remark:
When using SSL, JSSE is a major source of unneeded allocation (sun.security.ssl.SSLContextImpl#getDefaultCipherSuiteList(boolean)).
https://bugs.openjdk.java.net/browse/JDK-8133070 should improve the situation (I did not test it). Note that the title of that bug is misleading, as it is also about an internal cache being cleared on each method call.