Feb 3, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Update - Actions is operating normally.
Update - Based on our telemetry, most customers should see full recovery from failing GitHub Actions jobs on hosted runners.
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Update - Copilot is operating normally.
Update - Pages is operating normally.
Update - Our upstream provider has applied a mitigation to address queuing and job failures on hosted runners.
Update - We continue to investigate failures impacting GitHub Actions hosted-runner jobs.
Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Update - We continue to investigate failures impacting GitHub Actions hosted-runner jobs.
Update - The team continues to investigate issues causing GitHub Actions jobs on hosted runners to remain queued for extended periods, with a percentage of jobs failing. We will continue to provide updates as we make progress toward mitigation.
Update - Pages is experiencing degraded performance. We are continuing to investigate.
Update - The team continues to investigate issues causing GitHub Actions jobs on hosted runners to remain queued for extended periods, with a percentage of jobs failing. We will continue to provide updates as we make progress toward mitigation.
Update - Actions is experiencing degraded availability. We are continuing to investigate.
Update - GitHub Actions hosted runners are experiencing high wait times across all labels. Self-hosted runners are not impacted.
Investigating - We are investigating reports of degraded performance for Actions
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Update - Codespaces is operating normally.
Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
Update - Codespaces is seeing steady recovery.
Update - Users may see errors creating or resuming codespaces. We are investigating and will provide further updates as we have them.
Investigating - We are investigating reports of degraded availability for Codespaces
Feb 2, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Update - Dependabot is currently experiencing an issue that may cause scheduled update jobs to fail when creating pull requests.
Investigating - We are investigating reports of impacted performance for some GitHub services.
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Update - We’ve observed a low rate (~0.01%) of 5xx errors for HTTP-based fetches and clones. We’re currently routing traffic away from the affected location and are seeing recovery.
Update - Git Operations is experiencing degraded performance. We are continuing to investigate.
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 1, 2026
No incidents reported.
Jan 31, 2026
No incidents reported.
Jan 30, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Update - Customers may experience misreported Copilot Coding Agent tasks in the GitHub UI. Although the underlying actions are completing as requested, surfaces like Agent Sessions on the GitHub website, or Agent Hub in VS Code, will show that an agent is still working on a task, even if that work has completed.
Investigating - We are investigating reports of degraded performance for Actions
Jan 29, 2026
No incidents reported.
Jan 28, 2026
Resolved - On Jan 28, 2026, between 14:56 UTC and 15:44 UTC, GitHub Actions experienced degraded performance. During this time, workflows experienced an average delay of 49 seconds, and 4.7% of workflow runs failed to start within 5 minutes. The root cause was an atypical load pattern that overwhelmed system capacity and caused resource contention.
Update - Actions workflow run starts are delayed. We are actively investigating to find a mitigation.
Investigating - We are investigating reports of degraded performance for Actions
Jan 27, 2026
No incidents reported.
Jan 26, 2026
Resolved - On Jan 26, 2026, from approximately 14:03 UTC to 23:42 UTC, GitHub Actions experienced job failures on some Windows standard hosted runners. This was caused by a configuration difference in a new Windows runner type that left the expected D: drive missing. About 2.5% of all Windows standard runner jobs were impacted. Re-runs of failed workflows had a high chance of succeeding, given the limited rollout of the change.
Update - At 23:45 UTC we applied a mitigation to take remaining impacted capacity offline and are seeing improvement. We will update again once we've confirmed the issue is resolved.
Update - Our investigation into GitHub Actions 4 Core Windows runner failures in public repositories is ongoing.
Update - We're continuing to investigate failures in GitHub Actions 4 Core Windows runners in public repositories.
Update - Rollback has been completed, but we are still seeing failures on about 11% of GitHub Actions runs on 4 Core Windows runners in public repositories.
Update - Mitigation for failing GitHub Actions jobs on 4-Core Windows runners is still in progress. You should start to see more runs succeeding.
Update - We've applied a mitigation to unblock Actions runs. A regression for Windows runners in public repositories caused Actions workflows to fail; with the mitigation in place, customers should expect to see resolution soon.
Investigating - We are investigating reports of impacted performance for some GitHub services.
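The resolution note above says re-runs of failed workflows had a high chance of succeeding because only about 2.5% of Windows standard runner jobs hit the misconfigured capacity. A back-of-the-envelope sketch of why, assuming each attempt lands on runner capacity independently (an assumption; the incident report only gives the 2.5% figure):

```python
# Sketch: why re-running a failed job was likely to succeed, assuming each
# attempt lands on runner capacity independently (an assumption; the
# incident report only states the 2.5% impact figure).

P_BAD = 0.025  # ~2.5% of Windows standard runner jobs hit bad capacity

# Probability a job fails on the first attempt AND again on a re-run.
p_two_failures = P_BAD * P_BAD

# Probability that a re-run of an already-failed job succeeds.
p_rerun_succeeds = 1 - P_BAD

print(f"Chance both attempts fail: {p_two_failures:.4%}")   # 0.0625%
print(f"Chance a re-run succeeds:  {p_rerun_succeeds:.1%}")  # 97.5%
```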
Jan 25, 2026
Resolved - Between January 24, 2026, 19:56 UTC and January 25, 2026, 2:50 UTC, repository creation and cloning were degraded. On average, the error rate was 25%, peaking at 55% of requests for repository creation. This was due to increased latency on the repositories database, which triggered a read-after-write consistency problem during repository creation. We mitigated the incident by stopping an operation that was generating load on the database, increasing throughput.
Update - The issue has been resolved. We will continue to monitor to ensure stability.
Update - Repo creation failure rate increased above 50%. We have mitigated the problem and are monitoring for recovery.
Investigating - We are investigating reports of impacted performance for some GitHub services.
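The root cause above is a classic read-after-write hazard: a freshly created row is read back before a lagging replica has applied the write. A minimal sketch of the failure mode and the usual fix of routing reads-after-write to the primary; the class names are illustrative, not GitHub's actual schema:

```python
# Minimal sketch of a read-after-write hazard with a lagging replica.
# Names (Primary, LaggingReplica, "repo:42") are illustrative only.

class Primary:
    def __init__(self):
        self.rows = {}

    def write(self, key, value):
        self.rows[key] = value

    def read(self, key):
        return self.rows.get(key)

class LaggingReplica:
    """Replica that has not yet applied recent writes."""
    def __init__(self, primary):
        self.primary = primary
        self.rows = {}  # stale snapshot; replication is delayed

    def read(self, key):
        return self.rows.get(key)

primary = Primary()
replica = LaggingReplica(primary)

# Create a repository row, then immediately read it back.
primary.write("repo:42", {"name": "example"})

# Reading from the lagging replica misses the new row, so the creation
# appears to have failed even though the write succeeded.
assert replica.read("repo:42") is None

# Mitigation: serve reads that immediately follow a write from the
# primary until the replica catches up.
assert primary.read("repo:42") == {"name": "example"}
```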
Jan 24, 2026
No incidents reported.
Jan 23, 2026
No incidents reported.
Jan 22, 2026
Resolved - On January 22, 2026, our authentication service experienced an issue between 14:00 UTC and 14:50 UTC, resulting in downstream disruptions for users.
Update - We have identified and mitigated an issue in one of our services. Services have recovered; we are working on a longer-term solution.
Update - Issues is operating normally.
Update - Issues is experiencing degraded performance. We are continuing to investigate.
Investigating - We are investigating reports of impacted performance for some GitHub services.
Jan 21, 2026
Resolved - On January 21, between 17:50 and 20:53 UTC, around 350 enterprises and organizations experienced slower load times or timeouts when viewing Copilot policy pages. The issue was traced to performance degradation under load, caused by an issue in an upstream database caching layer within our billing infrastructure, which increased the latency of queries retrieving billing and policy information from approximately 300ms to up to 1.5s.
Update - We are rolling out a fix to reduce latency and timeouts on policy pages and are continuing to monitor impact.
Update - We are continuing to investigate latency and timeout issues affecting Copilot policy pages.
Update - We are investigating timeouts for customers visiting the Copilot policy pages for organizations and enterprises.
Investigating - We are investigating reports of impacted performance for some GitHub services.
Resolved - On Jan 21, 2026, between 11:15 UTC and 13:00 UTC, the Copilot service was degraded for the Grok Code Fast 1 model. On average, more than 90% of requests to this model failed due to an issue with an upstream provider. No other models were impacted.
Update - We are experiencing degraded availability for the Grok Code Fast 1 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.
Investigating - We are investigating reports of degraded performance for Copilot
Jan 20, 2026
Resolved - On January 20, 2026, between 19:08 UTC and 20:18 UTC, manually dispatched GitHub Actions workflows saw delayed job starts. GitHub products built on Actions such as Dependabot, Pages builds, and Copilot coding agent experienced similar delays. All jobs successfully completed despite the delays. At peak impact, approximately 23% of workflow runs were affected, with an average delay of 11 minutes.
Update - We are investigating delays in manually dispatched Actions workflows as well as other GitHub products which run on Actions. We have identified a fix and are working on mitigating the delays.
Investigating - We are investigating reports of degraded performance for Actions
Resolved - On January 20, 2026, between 14:39 UTC and 16:03 UTC, actions-runner-controller users experienced a 1% failure rate for API requests managing GitHub Actions runner scale sets. This caused delays in runner creation, resulting in delayed job starts for workflows targeting those runners. The root cause was a service-to-service circuit breaker that incorrectly tripped for all users when a single user hit rate limits for runner registration. The issue was mitigated by bypassing the circuit breaker, and users saw immediate and full service recovery following the fix.
Update - GitHub Actions customers that use actions-runner-controller are experiencing errors from the API that informs auto-scaling. We are investigating the issue and working on mitigating the impact.
Investigating - We are investigating reports of degraded performance for Actions
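The root cause described above, a shared circuit breaker tripping for every user when one user hit rate limits, is a common pitfall. A minimal sketch contrasting a breaker keyed globally with one scoped per client; the breaker logic and thresholds are illustrative, not GitHub's implementation:

```python
# Sketch: a circuit breaker keyed globally trips for all clients when one
# client's requests fail; keying the breaker per client contains the blast
# radius. Thresholds and names are illustrative, not GitHub's implementation.

class CircuitBreaker:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = {}  # scope key -> consecutive failure count

    def allow(self, key):
        return self.failures.get(key, 0) < self.threshold

    def record_failure(self, key):
        self.failures[key] = self.failures.get(key, 0) + 1

GLOBAL = "all"

global_breaker = CircuitBreaker()
per_client_breaker = CircuitBreaker()

# One client ("noisy") hits rate limits repeatedly.
for _ in range(3):
    global_breaker.record_failure(GLOBAL)        # bug: single shared key
    per_client_breaker.record_failure("noisy")   # fix: key by client

# The global breaker now rejects an unrelated, healthy client too.
assert not global_breaker.allow(GLOBAL)

# The per-client breaker still lets the healthy client through.
assert per_client_breaker.allow("other")
assert not per_client_breaker.allow("noisy")
```

Bypassing the breaker, as the report describes, restores traffic immediately; re-scoping it per client is the kind of longer-term fix that prevents the recurrence.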