The diagram below is gradually revealed in this video.
Different colors in the diagram represent "ownership" (who does what) within a particular organization. Other organizations will likely use different colors to indicate status.
This was created in response to the need for a more nimble, comprehensive, yet lower-cost approach to measuring, and thus improving, the speed, capacity, and reliability of high-traffic web and native mobile apps, an approach many call "performance engineering".
BTW, if you see a typo that needs fixing or an idea that should be considered, please fork this repo, edit the file, and send us a pull request. Better yet, join us to revolutionize the industry. As organizations not in the business of selling performance engineering software and services to the public, it's natural for us to develop this framework as open source in public GitHub repos, so we can share the benefits as well as the development costs, and also ensure continuity of skills and effort.
## Narrative of diagram

Capabilities | Components |
---|---|
Our concern is the ability of servers running individually and within a cluster to | app server |
serve requests from internet browsers on desktops and | browser clients |
with mobile devices, | mobile clients |
coded with React-based libraries. | react |
Organizations that use a continuous integration and deployment workflow | - |
make use of a source code repository (Git or Subversion) to store not just programming source code text, but also | git |
script code that provision servers to work the same on local laptops as on servers in the cloud. | Docker |
Deployment scripts | Puppet or Chef |
attach agents on each server's virtual machine to | server agents |
obtain a stream of monitoring information (such as garbage collection frequency). | monitoring |
This activity may occur simultaneously on several temporary environments when a developer commits into a special branch in Git, which automatically triggers Jenkins (or another continuous integration orchestration tool) to run jobs that build servers and run tests on them. | Jenkins |
Builds are based on dependencies defined in Maven (or Gradle). | Maven |
Compiled outputs from Jenkins (such as .dll and .war files) are stored in a binary repository (Artifactory), which serves a role like the public Homebrew on Macs or Chocolatey on Windows. | Artifactory |
To predict what load a system might be really able to deliver, we artificially generate load on servers by programs taking the place of humans typing on browsers and mobile devices. | load gen |
The sequence and logic of actions are provided by emulation (JMeter) scripts that | JMeter scripts |
reference (sham) data generated to avoid using sensitive or personal identification information. | sham data |
To ensure code quality, static scans of source code (using perhaps SonarQube) are run according to rules defined by the team. | SonarQube |
During development, testing, and deployment, external vendor APIs are often used by apps to send email, SMS, get vendor inventory status, etc. | External APIs |
So we mock those dependency services (using Wiremock) to ensure constant access. This also enables us to artificially vary vendor response time to see their consequence in our system. | Wiremock |
The amount of time and how often each line of code was executed can be analyzed using profiler software. | profiler |
But that often requires repeatable execution of the same scenarios. So on desktop browsers, | Selenium Web Driver |
we use code that manipulates the browser UI like real people do, | Selenium code |
just as we control mobile native apps | Appium Driver |
using (Appium) mobile app test automation code. | Appium Code |
The timing of each manual action during functional test script execution can be captured and stored to automatically detect changes in response time. | timings |
The size and download time of each file can be optionally captured | BrowserMob Proxy |
into HTTP Archive (HAR) files for detailed analysis. | HAR files |
Configuration settings controlling app servers, such as memory and thread limits, can limit the rate of processing. | configs |
So a program is needed to automate the planning, re-building, and running of servers with different mixes to identify that ideal set of configuration values that yield the most throughput at the least cost. | run variations |
Apps being built today make use of microservices REST APIs (application programming interfaces) for computers to communicate directly with other computers. | Internal APIs |
When these APIs are completely described in a standard format (called Swagger spec), | Swagger spec |
client code can be generated. Code generation is revolutionary in its potential for reducing cycle time. | Swagger codegen |
Automatic generation of load test scripts means that micro-benchmarks or entire stress tests can be run as soon as a developer checks in code, without waiting for manual test script creation. | Jmeter gen |
Analysis of the impact from load imposed over time | loadtest logs |
needs to be correlated to the same time context of logs from operating systems and app servers | app server logs |
as well as tracing from network captures. | Network trace |
Finding root causes requires all this sensor data to be collected and digested together. | Logstash |
Since there can be a large number of logs, intermediate servers (such as RabbitMQ) may be added. | Logstash setup |
Indexing data over time and various other dimensions | Elasticsearch |
enables us to filter and sort data creatively for insights. | Kibana |
One key insight is understanding why individual spikes or drops occur. This is why, instead of summarizing data, we need to export and archive detailed data, then import them back later | import-export |
so we can compare fine-grained data across several releases. | compare |
On top of live measurements we overlay objectives and targets (such as expected growth in transaction volume) as reference on graphs. | ref. data |
A big reason for load testing is to identify thresholds for action such as adding more servers or | thresholds |
sending alerts for human expert review. | alerts |
It takes a tremendous amount of attention and skill to be the "air traffic controller" to a complex mix of clustered servers. So we want to make use of recent advances in statistical data science and "Machine learning" which can scan the database to dynamically identify thresholds and issue alerts. | DS & machine learning |
And since people can be overwhelmed by too many alerts and emails, we also want recommendations to help us focus on the most effective action among the cascade of events. | schedule |
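As a concrete illustration of the HAR analysis step in the table above: HAR files are JSON, so the size and download time of each file can be totaled with a short script. The layout below follows the HAR 1.2 spec (`log.entries[].response.bodySize` and per-entry `time` in milliseconds); the sample data itself is made up.

```python
import json

# Minimal made-up HAR 1.2 document (normally captured via BrowserMob Proxy).
har_text = json.dumps({
    "log": {
        "entries": [
            {"request": {"url": "http://example.com/"},
             "response": {"bodySize": 5120}, "time": 120},
            {"request": {"url": "http://example.com/app.js"},
             "response": {"bodySize": 20480}, "time": 300},
            {"request": {"url": "http://example.com/logo.png"},
             "response": {"bodySize": 10240}, "time": 80},
        ]
    }
})

def summarize_har(text):
    """Return (total_bytes, total_ms, slowest_url) for a HAR document."""
    entries = json.loads(text)["log"]["entries"]
    total_bytes = sum(e["response"]["bodySize"] for e in entries)
    total_ms = sum(e["time"] for e in entries)
    slowest = max(entries, key=lambda e: e["time"])["request"]["url"]
    return total_bytes, total_ms, slowest

print(summarize_har(har_text))  # (35840, 500, 'http://example.com/app.js')
```

The same totals, captured per build, are what feed the automatic detection of response-time changes described above.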
A. Eliminate errors in program coding source by automatic generation of programming code based on specs. Although various attempts at generating UI code have not taken hold due to complexity, generation of APIs is less complex of an undertaking.
B. Test immediately in the dev. lifecycle through automatic generation of test automation scripts and API mock scripts. Making changes easy, fast, and safe enables fix-fast which makes systems more "correct" than monolithic design.
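A minimal sketch of the generation idea in item B: given a toy, hand-written fragment in the shape of a Swagger/OpenAPI `paths` section, emit one HTTP request stub per path and method. Real generators like swagger-codegen do far more; this only shows why spec-driven generation is mechanical and fast.

```python
def gen_requests(spec):
    """Turn a Swagger-style 'paths' section into (METHOD, url) request stubs."""
    base = spec.get("basePath", "")
    stubs = []
    for path, methods in sorted(spec["paths"].items()):
        for method in sorted(methods):
            stubs.append((method.upper(), base + path))
    return stubs

# Toy spec fragment, hand-written for illustration.
spec = {
    "basePath": "/api/v1",
    "paths": {
        "/users": {"get": {}, "post": {}},
        "/users/{id}": {"get": {}},
    },
}

for method, url in gen_requests(spec):
    print(method, url)
```

Each stub could then be emitted as a JMeter HTTP sampler or a mock-service mapping, which is what makes "test immediately" possible.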
C. Automatic alerts of slow execution speeds during automated functional testing, discovered by machine learning rather than tedious manual examination of logs.
D. Automatically cycle through variations of several configurations during a single manual run initiation. More important than being hands-free, this enables performance analysis to go beyond merely testing to engineering.
The objective here is to reduce the amount of manual effort (and human error) in conducting tests through automation.
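The "run variations" idea in item D can be sketched as a cartesian product of candidate settings, with one test run per combination. The setting names, their values, and the scoring function below are all placeholders; a real harness would rebuild and load-test servers instead of calling a toy model.

```python
from itertools import product

# Candidate settings to sweep (placeholder names and values).
heap_mb = [512, 1024, 2048]
threads = [50, 100]

def run_load_test(heap, thr):
    """Stand-in for deploying a server with these settings and measuring
    throughput; whichever resource saturates first caps the toy result."""
    return min(heap / 10, thr * 8)

# Try every combination and keep the one with the highest throughput.
best = max(product(heap_mb, threads), key=lambda cfg: run_load_test(*cfg))
print("best config:", best)
```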
E. Centralizing data from various views of system behavior under stress (or not), so it can be analyzed together, provides the basis for identifying trends and other insights using both manual and "machine learning" techniques. Machine learning can identify more minute issues more comprehensively.
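A minimal statistical version of the dynamic-threshold idea in item E: flag any measurement more than k standard deviations above the mean. Real machine-learning approaches are far more sophisticated; this only shows the shape of the computation, on made-up timings.

```python
from statistics import mean, stdev

def flag_spikes(samples, k=3.0):
    """Return indexes of samples more than k standard deviations above the mean."""
    m, s = mean(samples), stdev(samples)
    return [i for i, x in enumerate(samples) if x > m + k * s]

response_ms = [210, 198, 205, 202, 199, 950, 204, 201]  # made-up response times
print(flag_spikes(response_ms, k=2.0))  # [5] -- the 950 ms spike
```

Flagged indexes are exactly the individual spikes that the import-export and compare steps above let us drill into across releases.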
Sending JMeter results to the ELK stack means that listeners are not needed within JMeter. Kibana would then replace what BlazeMeter displays.
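To make the JMeter-to-ELK pipeline concrete, here's a hedged sketch that turns one line of a JMeter JTL (CSV) results file into the action/document line pairs the Elasticsearch `_bulk` API expects. The column names match JMeter's default CSV header; the index name and document field names are assumptions.

```python
import csv
import io
import json

# One sample line in JMeter's default JTL/CSV layout (abbreviated columns).
jtl = io.StringIO(
    "timeStamp,elapsed,label,responseCode,success\n"
    "1700000000000,245,Login,200,true\n"
)

def jtl_to_bulk(reader, index="jmeter-results"):  # index name is an assumption
    """Yield Elasticsearch _bulk action/document line pairs per JTL sample."""
    for row in csv.DictReader(reader):
        yield json.dumps({"index": {"_index": index}})
        yield json.dumps({
            "@timestamp": int(row["timeStamp"]),
            "label": row["label"],
            "elapsed_ms": int(row["elapsed"]),
            "success": row["success"] == "true",
        })

bulk_body = "\n".join(jtl_to_bulk(jtl)) + "\n"  # _bulk requires trailing newline
print(bulk_body)
```

The resulting body would be POSTed to Elasticsearch's `_bulk` endpoint (or shipped via Logstash), after which Kibana can chart `elapsed_ms` over `@timestamp`.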
## Authors

Contact information for authors of this repo:

Wilson Mar, @wilsonmar, https://www.linkedin.com/in/wilsonmar
wilsonmar at gmail, 310.320-7878
wilsonmar4 on Skype.
Anil Mainali, @mainalidfw, mainalidfw at gmail, https://www.linkedin.com/in/anilmainali