Copilot update: rate limits + fixes #190176
Replies: 5 comments
- What would be helpful is some sort of auto-restart once the limit is lifted, or a long-running-process flag that slows things down on the client based on current overall server load. I don't mind the rate limiting so much as the disruption it causes.
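The auto-restart idea above could be sketched client-side: instead of failing the whole run on a 429, back off and resume automatically once the limit window passes. This is a hypothetical sketch, not Copilot's actual behavior; `task`, the retry cap, and the backoff schedule are all illustrative assumptions.

```python
import time

def run_with_auto_resume(task, max_retries=5):
    """Run `task` (a callable returning (status, result)).

    On a 429 response, sleep and retry with capped exponential backoff,
    resuming automatically instead of aborting the whole run.
    """
    delay = 1.0
    for _ in range(max_retries):
        status, result = task()
        if status != 429:
            return result
        time.sleep(delay)              # approximate the server's cooldown
        delay = min(delay * 2, 60.0)   # exponential backoff, capped at 60s
    raise RuntimeError("still rate limited after retries")
```

A real client would prefer the server's `Retry-After` header (when one is sent) over a guessed backoff schedule.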
- The recent changes to GitHub Student Copilot make absolutely zero sense. You give us access to the actually capable models (Claude Opus, Sonnet) through the Student Pack. We integrate them into our workflows and start relying on them to learn and to build proper architecture... and then overnight, you silently restrict access to push everyone toward the paid premium tiers. If the goal is just to use students as an upsell funnel, you might as well kill the useless, nerfed Student Pack entirely. Giving us a crippled tool that throws model_not_supported errors the second we need to do real work is just insulting. Typical Microsoft move: give with one hand, take with the other. Do better.
- Thanks for the transparency and the detailed breakdown; it's genuinely appreciated. That said, the core frustration for users wasn't just the limits themselves but how suddenly and broadly they were enforced. Because the fix applied system-level limits across all models, it disrupted even normal workflows, which made the experience feel unpredictable. One suggestion for the future: graceful degradation instead of hard blocking. Overall, it's good to see quick mitigation and acknowledgment; hopefully future changes can balance system protection with a smoother user experience.
- Still experiencing rate limits when using 5.4 to build apps; they kick in after some time of frequent use.
- Switching agent models doesn't fully work: I just switched the agent run from Claude to GPT 5.3, and the agent itself now works, but the automatic review it starts as its last step is still rate limited and runs into 429s. The agent also says it cannot control which LLM the code-review tool uses, so no agent can complete its work properly.
-
Hey folks, given the large increase in Copilot users impacted by rate limits over the past several days, we wanted to provide a clear update on what happened and to acknowledge the impact and frustration this caused for many of you.
What happened
On Monday, March 16, we discovered a bug in our rate-limiting that had been undercounting tokens from newer models like Opus 4.6 and GPT-5.4. Fixing the bug restored limits to previously configured values, but due to the increased token usage intensity of these newer models, the fix mistakenly impacted many users with normal and expected usage patterns. On top of that, because these specific limits are designed for system protection, they blocked usage across all models and prevented users from continuing their work. We know this experience was extremely frustrating, and it does not reflect the Copilot experience we want to deliver.
Immediate mitigation
We increased these limits Wednesday evening PT and again Thursday morning PT for Pro+/Copilot Business/Copilot Enterprise, and Thursday afternoon PT for Pro. Our telemetry shows that limiting has returned to previous levels.
Looking forward
We’ll continue to monitor and adjust limits to minimize disruption while still protecting the integrity of our service. We want rate limits to rarely impact normal users and their workflows. That said, growth and capacity are pushing us to introduce mechanisms that control demand for specific models and model families as we operate Copilot at scale across a large user base. We’ve also started rolling out limits for specific models, with higher-tiered SKUs getting access to higher limits. When users hit these limits, they can switch to another model, use Auto (which isn’t subject to these model limits), wait until the temporary limit window ends, or upgrade their plan.
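The options described above (switch model, fall back to Auto, or wait) can be sketched as a simple client-side fallback chain. Everything here is an assumption for illustration: the model names, the `send` callable, and the fallback order are hypothetical, not Copilot's real API.

```python
# Illustrative model names only; "auto" stands in for the routing tier
# described above as not subject to per-model limits.
FALLBACK_ORDER = ["opus-4.6", "gpt-5.4", "auto"]

def complete_with_fallback(send, prompt, models=FALLBACK_ORDER):
    """Try each model in order.

    `send(model, prompt)` returns (status, body). On a 429 per-model
    limit, fall back to the next model; if every model is limited,
    the caller must wait for the temporary limit window to end.
    """
    for model in models:
        status, body = send(model, prompt)
        if status != 429:
            return model, body
    raise RuntimeError("all models rate limited; wait for the limit window to end")
```

In practice an agent would also need to apply the same fallback to sub-tasks it launches (such as an automatic code review), which is exactly the gap one commenter above ran into.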
We're also investing in UI improvements that give users clearer visibility into their usage as they approach these limits, so they aren't caught off guard.
We appreciate your patience and feedback this week. We’ve learned a lot and are committed to continuously making Copilot a better experience.