Dynamic tags applied like health checks. #1048
Comments
Interesting idea. I think the work-around you mentioned is a decent way of doing this, but I'm going to leave this open as a thought ticket for now. Thanks!
@fidian with respect to your statement: "On boot they will query for mongodb.service.consul and join the cluster." Can you describe this a bit more, since I want to set up something similar for a redis cluster? Do you use some handcrafted script (e.g., via consul-template or the REST API) to query mongodb.service.consul and get all registered nodes for that service, or are you relying on the DNS mechanism for that? At least one problem with relying solely on DNS is that if the node registers itself (e.g., with registrator) in the consul cluster before it does the DNS lookup for mongodb.service.consul, it might get back its own IP address, which would not be helpful for joining the cluster... :-)
This would be useful for services like zookeeper, which dynamically elect a leader node among themselves every time a node joins or leaves the cluster, and where the leader can have a setting turned on so that it no longer accepts client connections. Having dynamic tags like this via a check would make it so I could query consul for the non-leader nodes and not have a client trying to connect to the leader at all.
@Kosta-Github asked how I manage to auto-cluster my mongo instances.
The only snag is that I must start one instance of mongo initially so it will bootstrap the replica set. Once it is running I am able to add and remove instances to my replica set.
@fidian thanks for the explanation; just one more question: how does your ...
@Kosta-Github it looks like the following. I'd also answer questions off this issue. Feel free to email me directly at [email protected] so we don't continue to pollute this thread.
+1 for this feature request
+1
4 similar comments
+1
+1
+1
+1
This would be very, very nice. There are all kinds of things for which clients need to connect to the master explicitly. A dynamic tag would be so elegant, so much better than a bunch of extra scripts to tweak tags.
+1
1 similar comment
+1
+1 A tag plus script would be very useful for implementing custom DNS response logic.
Currently we have to run two 'services' for a similar situation: a "redis" service which includes all nodes in the cluster, and a "redis-master" service. This has the unfortunate side effect that most of the redis nodes are always 'failing' the health check because they're not the master. Would definitely appreciate this feature as a way around this.
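For illustration, a minimal sketch of that two-service workaround as an agent config, assuming Consul 1.0+ args-style script checks with `enable_script_checks` turned on; the service name and port are placeholders, not taken from the comment above:

```json
{
  "services": [
    {
      "name": "redis",
      "port": 6379,
      "check": {
        "args": ["redis-cli", "ping"],
        "interval": "10s"
      }
    },
    {
      "name": "redis-master",
      "port": 6379,
      "check": {
        "args": ["sh", "-c", "redis-cli info replication | grep -q role:master"],
        "interval": "10s"
      }
    }
  ]
}
```

The `redis-master` check passes only on the current master, so `redis-master.service.consul` resolves to a single node, at the cost of every replica permanently showing a failing check.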
Consul 0.6 added a "tag override" feature that's useful for implementing schemes like this, though the logic runs outside of Consul rather than inside Consul itself as suggested here. Here's the issue that brought it in: #1102. The documentation lives at https://www.consul.io/docs/agent/services.html.
This would let an external agent, such as a script working with redis-sentinel, apply tags to the current master via Consul's catalog API.
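As a rough sketch of how that fits together (not an excerpt from the docs): the local service definition opts in with `enable_tag_override`, and the external agent then rewrites the tags through the catalog API. The node name, address, and port below are placeholders.

```json
{
  "service": {
    "name": "redis",
    "port": 6379,
    "tags": ["replica"],
    "enable_tag_override": true
  }
}
```

```sh
# Hypothetical promotion hook (e.g. fired by a redis-sentinel notification):
# retag this node as the master via the catalog API.
# Node name and address are placeholders.
curl -s -X PUT http://127.0.0.1:8500/v1/catalog/register -d '{
  "Node": "redis-node-1",
  "Address": "10.0.0.11",
  "Service": {
    "ID": "redis",
    "Service": "redis",
    "Tags": ["master"],
    "Port": 6379
  }
}'
```

Because `enable_tag_override` is set, the agent's anti-entropy sync leaves the externally written tags alone instead of reverting them.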
+1 Would love to see this instead of the workaround with tag overriding.
+1
1 similar comment
+1
This is a brilliant idea :), I would also want this for a redis cluster!
+1 This would give us the ability to determine which application version should receive LB traffic in marathon.
+1
1 similar comment
+1
This feature would be great for my use case. I would really like to see this merged in eventually.
+1 Consul DNS, even with the two-service method, takes 15 to 30 minutes to propagate in the UI, API, and DNS.
@Sieabah that sounds like a function of DNS caching some place - you can adjust the TTL value to maybe improve that. The API/UI shouldn't have any delay. |
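For reference, these are the agent settings that control the DNS TTLs, shown as a minimal sketch with otherwise default configuration; Consul serves service lookups with a 0s TTL unless told otherwise, so longer delays usually point at an intermediate resolver cache:

```json
{
  "dns_config": {
    "service_ttl": {
      "*": "0s"
    },
    "node_ttl": "0s"
  }
}
```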
@slackpad I have all of the DNS caching set to 0. Querying the API and ignoring DNS entirely takes about the same amount of time to resolve. I'm sure something is misconfigured, as when I monitor the two boxes they both say "synced service:mongo" and "synced service:primary-mongo". With the current service definition I'm able to get it down to 5 minutes. During that time both services actually claim to be the primary (in the UI and API), even though in the logs they switch immediately.
I've tried re-registering via the API, reloading the config during the health check, and reloading from the API. I don't know what is making it take 5 minutes to propagate to a cluster of 3 servers and 2 clients, other than the anti-entropy sync only running every minute?
👍 this would simplify a lot of the "workarounds" we did to get master/slave tags.
Did we get anywhere with this? I'm looking for something similar at the moment, where I have a service with a master/slave type setup.
Not sure if Prepared Queries can be used to apply such rules. However, dynamic tagging is a good idea. Any plans to get it in?
This still looks like a great idea, but I see no indication of any traction toward getting it merged. Anyone care to give us an update?
This would be useful for us as well. What are your thoughts on how to design this? There are a few points to address, though:
@Aestek I agree, this kind of stuff would be really useful. For now we have lots of services such as:
Having a way to merge those services into one single service and just add a leader tag would be great. I know several systems where the checks for this kind of feature could also be simple HTTP checks, so limiting it to scripts is a bit less interesting.
The #1048 (comment) looks like a sensible approach (I mean, not linked to existing checks), because:
I did not check in detail what has been done in https://github.com/avdva/consul/tree/dynamic-tags, but it sounds to me like the right approach. While it limits the ability to have very dynamic things, it would greatly ease the implementation (most notably by avoiding conflicts between several checks).
I'll try to resurrect my branch soon. Will see if it still works.
@avdva we are really interested in this, tell us when you do so ;)
@avdva Did you have time to resurrect your branch? Hope it's not too complicated with all the conflicts there must be since 2016.
+1
This is an interesting idea and I could imagine us adding such a feature. The best way to get it in is to create a PR so that we have something to discuss. That would also make it easier to see the impact.
+1. I think @ShimmerGlass's suggestion is great: the tag should come from the script itself. This covers OP's use case but would solve additional ones.
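Purely as an illustration of that suggestion, and not syntax Consul actually supports: a check-like definition whose script prints the tags to apply on stdout might look something like this (the helper script name is hypothetical).

```json
{
  "_comment": "hypothetical syntax, not implemented in Consul",
  "service": {
    "name": "redis",
    "port": 6379,
    "tag_checks": [
      {
        "args": ["/usr/local/bin/print-redis-role.sh"],
        "interval": "10s"
      }
    ]
  }
}
```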
+1
1 similar comment
+1
I'll throw my 2 cents in (in 2021) and say that this feature would still be very much welcome, because it would cover at least 4 of my setups that currently rely on a bunch of external scripts to update tags.
How does ...
Yes, Vault re-registers the service in order to update any parameters that changed since the last registration. See the ...
We are really interested in this, any chance of seeing this feature in the future?
I can't understand why the hell this feature has been missing for years while it's even been half-implemented already.
Came across this issue while we were trying to implement a similar solution through other means. Would be a good-to-have enhancement for sure!!
This issue has now been open for almost a decade. Can someone from hashicorp please indicate whether this is actually on anyone's radar for planning? It is the lukewarmest of feedback to give this issue the "thinking" tag after it had been open for a week and then not do ANYTHING with the request for 9 years, even with people suggesting actual implementations. There is basically no feedback about why this may be considered for, or rejected from, inclusion in the project. With nearly 50 participants in the issue, you'd think that hashicorp could manage to at least have some meaningful engagement with the open source community. I'm gonna tag some recent contributors with merge/commit access, as well as some execs, to maybe prompt some actual engagement with this issue from anyone: @sarahalsmiller @rboyer @xwa153 @dhiaayachi @jmurret @Amier3 @armon
In issue #867 I suggested an idea to make tags that depend on the result of scripts, just like health checks.
I run mongo in the cloud with multiple machines all spun up from the same image. On boot they will query for mongodb.service.consul and join the cluster. That all works flawlessly. Being a good Ops person, I have a cron job that kills random machines in my infrastructure at random times. It will eventually hit the mongodb master, the system will hiccup, and a slave will be promoted automatically. Life is fantastic.
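As a concrete illustration of that boot-time lookup (not necessarily the author's exact command), a new node can resolve the existing members through the agent's DNS interface on port 8600:

```sh
# Healthy mongodb instances registered in Consul (A records, IPs only).
dig @127.0.0.1 -p 8600 mongodb.service.consul +short

# SRV query additionally returns the registered ports.
dig @127.0.0.1 -p 8600 mongodb.service.consul SRV
```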
In comes Legacy Software that must connect directly to the master mongodb instance. I would like to have master.mongodb.service.consul resolve to the one IP of the master in the cluster.
Current solution (runs via cron on all machines):
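A minimal sketch of such a cron job (not the original script), assuming the legacy mongo shell and the local agent's HTTP API; the service name, port, and tag are placeholders:

```sh
#!/bin/sh
# Re-register the local mongodb service, adding a "master" tag only while this
# node is the replica-set primary. Intended to run from cron on every machine.
IS_MASTER=$(mongo --quiet --eval 'db.isMaster().ismaster')

if [ "$IS_MASTER" = "true" ]; then
  TAGS='["master"]'
else
  TAGS='[]'
fi

curl -s -X PUT http://127.0.0.1:8500/v1/agent/service/register -d "{
  \"ID\": \"mongodb\",
  \"Name\": \"mongodb\",
  \"Port\": 27017,
  \"Tags\": $TAGS
}"
```

Once the tag is in place, master.mongodb.service.consul resolves to just the primary.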
Ideal solution:
Sample JSON (one static tag, one dynamic tag):
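A sketch of the general shape (not the original sample); the `dynamic_tags` block is hypothetical syntax for the proposed feature, applying the tag while the script exits 0, and is not something Consul supports today:

```json
{
  "_comment": "hypothetical: 'dynamic_tags' is not an existing Consul field",
  "service": {
    "name": "mongodb",
    "port": 27017,
    "tags": ["nosql"],
    "dynamic_tags": [
      {
        "tag": "master",
        "script": "/usr/local/bin/mongo-is-master.sh",
        "interval": "10s"
      }
    ]
  }
}
```

With something like this, master.mongodb.service.consul would track the current primary without any external cron job.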
This sort of solution could apply to issues #155 and #867, and possibly others.