Closing this issue as there hasn't been any activity here; there is an open duplicate, which will be kept open for some more time awaiting comments from the reporter.
Description
We previously ran 16 warm data nodes (on Kubernetes), each with the same resources: 32 GB heap, 64 GB RAM, and 3 TB of HDD storage.
Since using a distributed storage system for Elasticsearch is generally not recommended, we decided to use local disks instead.
As we were running on Kubernetes, which gave us several advantages, we looked for a Kubernetes storage option with near-local disk performance and found OpenEBS (Local PV: LVM).
I have also asked about this problem in the Elastic community forum, and they also think the root cause lies somewhere else (i.e., not in Elasticsearch itself).
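For context, provisioning an OpenEBS Local PV (LVM) volume roughly involves creating a volume group on each node and pointing a StorageClass at it. A minimal sketch follows; the device path (`/dev/sdb`) and the names `lvmvg` and `openebs-lvmpv` are illustrative, not our actual configuration:

```shell
# Create an LVM volume group on the node's local disk
# (device name is an assumption; adjust to your hardware).
sudo pvcreate /dev/sdb
sudo vgcreate lvmvg /dev/sdb

# StorageClass backed by the OpenEBS LVM CSI driver.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
provisioner: local.csi.openebs.io
allowVolumeExpansion: true
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
volumeBindingMode: WaitForFirstConsumer
EOF
```

`WaitForFirstConsumer` matters here: it delays binding until the pod is scheduled, so the PV is carved from the volume group on the node that actually runs the Elasticsearch pod.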
Expected Behavior
The new cluster has 10 warm nodes, each with the same RAM and CPU as before but with 8 TB of HDD storage, which we expected to provide at least the same performance and latency as our previous cluster.
Although reducing the number of nodes may reduce performance somewhat, we do not think that is the real problem; we believe the root cause is OpenEBS itself.
Current Behavior
We are seeing roughly 10x higher latency (0.2 ms -> 2 ms) and roughly 10x higher `took` times than on our previous cluster.
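One way to check whether the extra latency comes from the storage layer rather than from Elasticsearch is to measure raw random-read latency on a mount backed by the OpenEBS volume and on a plain local disk, and compare. The sketch below is illustrative only (not the tooling we used); the file path and iteration count are assumptions, and without `O_DIRECT` it largely measures page-cache latency, so a dedicated tool such as fio with `--direct=1` is preferable for a real device-level comparison:

```python
import os
import random
import statistics
import time


def sample_read_latency(path: str, block: int = 4096, iters: int = 200) -> float:
    """Return the median latency (ms) of random block-aligned reads from path."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        latencies = []
        for _ in range(iters):
            # Pick a random block-aligned offset inside the file.
            off = random.randrange(0, max(size - block, 1))
            off -= off % block
            t0 = time.perf_counter()
            os.pread(fd, block, off)
            latencies.append((time.perf_counter() - t0) * 1000.0)
        return statistics.median(latencies)
    finally:
        os.close(fd)


if __name__ == "__main__":
    # Scratch file so the probe is self-contained; on a real cluster you
    # would point this at a file on the volume under test instead.
    with open("probe.bin", "wb") as f:
        f.write(os.urandom(4 * 1024 * 1024))
    print(f"median read latency: {sample_read_latency('probe.bin'):.3f} ms")
```

Running the same probe against both storage backends gives a quick sanity check on whether the 10x gap reproduces below Elasticsearch.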
Your Environment