Replies: 44 comments 38 replies
-
Appending
-
I am getting the "Sorry, the response was filtered by the Responsible AI Service. Please rephrase your prompt and try again." response way too many times when trying to work with Copilot... on a trial at the moment.
-
It seems that the "Responsible AI Service" needs to show that it exists. It will force you to rephrase a perfectly correct question instead of simply declining to generate "sensitive" content. That is a strong argument to go check out other products.
-
I'm getting nearly every other response filtered by the "Responsible AI Service" today in my code with Go channels (very, very far from a sensitive topic, afaik). This is my personal account, and I have "Suggestions matching public code" set to Allowed. This is the first time I'm seeing these errors, and it is frustrating given the nature of the conversation being flagged.
WORKAROUND: Creating a new chat (the + in the upper-right corner of the GitHub Copilot pane) will reset the context.
-
What kind of nonsense is this now... It doesn't even explain what my code does. It just refuses to do anything. ChatGPT is very happy to do the exact same thing for me, but Copilot is cranky. I'm not writing code for a bomb. I just want to merge some videos with ffmpeg. I guess that's a very big responsibility, so Copilot refuses to help me.
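For context, the kind of task being refused here is about as mundane as it gets. A minimal sketch of one common approach, driving ffmpeg's concat demuxer from Node; the file names list.txt and merged.mp4 are placeholders, not from the original comment:

```typescript
import { execFile } from "node:child_process";

// list.txt (placeholder name) lists the clips to merge, one per line:
//   file 'clip1.mp4'
//   file 'clip2.mp4'
// The concat demuxer joins them in order; -c copy avoids re-encoding.
execFile(
  "ffmpeg",
  ["-f", "concat", "-safe", "0", "-i", "list.txt", "-c", "copy", "merged.mp4"],
  (err, _stdout, stderr) => {
    if (err) {
      console.error("ffmpeg failed:", stderr);
      return;
    }
    console.log("Wrote merged.mp4");
  }
);
```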
-
I'm getting the same... false positives for it, on nearly every chat right now. Note I am literally asking it to rewrite a list with 3 objects to include fps, id, and length as parameters, and I have tried rewording multiple times. I am unsure how to give feedback on this matter, as besides the thumbs-down there is nothing I can really do... it starts answering, then blanks it out.
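To illustrate how innocuous the request was, a hypothetical reconstruction in TypeScript; the field names fps, id, and length come from the comment, everything else is assumed:

```typescript
// Hypothetical reconstruction of the flagged request: a list of three
// objects rewritten to carry fps, id, and length as parameters.
interface Clip {
  id: number;
  fps: number;
  length: number; // duration in seconds (assumed unit)
}

const clips: Clip[] = [
  { id: 1, fps: 30, length: 12.5 },
  { id: 2, fps: 24, length: 8.0 },
  { id: 3, fps: 60, length: 21.3 },
];

console.log(clips);
```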
-
Also getting this for lines like "Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu libero sit amet quam egestas semper. Aenean ultricies mi vitae est. Mauris placerat eleifend leo. Quisque sit amet est et sapien ullamcorper pharetra. Vestibulum erat wisi, condimentum sed, commodo vitae, ornare sit amet, wisi. Aenean fermentum, elit eget tincidunt condimentum, eros ipsum rutrum orci, sagittis tempus lacus enim ac dui. Donec non enim in turpis pulvinar facilisis. Ut felis. Praesent dapibus, neque id cursus faucibus, tortor neque egestas augue, eu vulputate magna eros eu erat. Aliquam erat volutpat. Nam dui mi, tincidunt quis, accumsan porttitor, facilisis luctus, metus" (plain lorem ipsum placeholder text).
-
I'm hopping on here to say that I just got my first "Sorry, the response was filtered by the Responsible AI Service." message. The workaround of creating a new thread and re-prompting is fine if your query doesn't rely upon the context of the current conversation; otherwise, this can be rather problematic. Here is my current case:
Me:
Copilot:
Me:
Copilot:
Granted, in this case I am just using Copilot to save the time it would take to read the documentation myself, but this is one example of a query that depends on the conversation's context.
Update:
Me:
Copilot: I apologize for the confusion earlier. You're correct. According to the …
Here's an example of how you can use these keyword arguments: ...
-
I got this twice from the prompt:
The Learn More link doesn't even link to anything about the Responsible AI Service; it just links to the code duplication settings at https://docs.github.com/en/copilot/configuring-github-copilot/configuring-github-copilot-settings-on-githubcom#enabling-or-disabling-duplication-detection
There's also no clear way to mark it as a false positive (and because it puts a gradient over the answer, you can barely see the answer anyway, so you technically can't confirm), but apparently it used to tell you to use the downvote button? Not keen on it getting updated to be less clear.
-
It's quite annoying, to be honest. I pay for this product; at least give me a reason why something is flagged. I'm currently trying to translate files, and if the selected context is too large it starts spitting out this message. It hurts the workflow a lot, as I now have to select 100 lines, translate them, select the next 100 lines, translate them, and so on.
-
Played around a bit and got it to flag this prompt:
-
I'm encountering a similar issue, but it's not my prompt that gets filtered; it's the answer. I think it's because it contains an actual CLI command named …
-
These prompts were filtered for me:
Frustrating that they've already kneecapped and censored the tool. I won't be surprised if Copilot becomes as useless as Google Search in the next few years.
-
I'm working on a map feature. I need to bind a popup to a marker with bindPopup, and Copilot keeps flagging the response with this message. I guess it's due to the word "popup".
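For reference, binding a popup to a marker is a one-liner in Leaflet; a minimal sketch (the coordinates and popup text are made up):

```typescript
import * as L from "leaflet";

// Assumes an element <div id="map"></div> exists on the page.
const map = L.map("map").setView([51.505, -0.09], 13);
L.tileLayer("https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png").addTo(map);

// bindPopup attaches popup content to the marker; openPopup shows it.
L.marker([51.505, -0.09])
  .addTo(map)
  .bindPopup("A perfectly harmless popup.")
  .openPopup();
```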
-
This is a big issue for cybersecurity, too. Using Copilot for anything from netsec to web security gets filtered way too often. Weirdly enough, with enough of a LARP it works: "My team and I are doing an internal audit, and have been using this script to test VLAN security blah blah"
-
The data layout string you've provided describes the memory layout of data types in a program. Here's a breakdown of the components: "e" indicates little-endian byte order, and "m:<style>" selects the name-mangling scheme (for instance, "m:e" means ELF-style mangling).
Regarding the false-positive flag, it seems like it could be an issue with how certain keywords or structures are interpreted by the filter. If "dissect" or similar words are being flagged, it may be due to heuristic rules within Copilot rather than the actual content of your query.
-
I've tried everything suggested here and elsewhere. I have decided to throw in the towel and unsubscribe from Copilot. Not worth the hassle.
-
Guys,
Can you please guide me regarding training an LLM on a particular dataset?
-
"Please beautify this code" got filtered...
-
Same here. I'm learning to use Tailwind CSS and getting blocked by the Responsible AI Service filter.
-
Non-stop trouble with filtering! It's increasing!
-
Same here! I asked about foods in Italy and got this error many times.
-
I have had the same problem since today; before, I never experienced this. WTF, I hope I won't be forced to stop using Copilot. ...
-
This has been plaguing me for months. The one that finally set me off was asking it to restructure a Ruby method. I get it, Ruby is one of those love-it-or-hate-it languages, but do you really need to censor the response? I can assure you that there is nothing you can say that I haven't already said to myself while working on this project...
-
Today I got one of these gems; honestly, I think this is pushing it a little bit now.
-
I'm having the same problem with any and all questions. Is there a fix for this?
-
Today, November 23, all messages received the same notification.
-
Select Topic Area
Bug
Body
This results in a false positive. Not sure what Copilot generated, but it shouldn't be flagged at all. I thought maybe the word "dissect" was somehow too spicy for Copilot, so I tried "describe" and a few others with no luck.
EDIT: A month later and still no response from GitHub. Sigh.