GitHub Status
All Systems Operational
Git Operations Operational
API Requests Operational
Webhooks Operational
Visit www.githubstatus.com for more information Operational
Issues Operational
Pull Requests Operational
Actions Operational
Packages Operational
Pages Operational
Codespaces Operational
Copilot Operational
Past Incidents
Aug 3, 2024

No incidents reported today.

Aug 2, 2024

No incidents reported.

Aug 1, 2024

No incidents reported.

Jul 31, 2024
Resolved - Beginning at 8:38 PM UTC on July 31, 2024 and lasting until 9:28 PM UTC, some customers of github.com saw errors when navigating to the Billing pages and/or when updating their payment method. These errors were caused by a degradation in one of our partner services. The partner service deployed a fix, and the Billing pages are functional again.

To improve detection of such issues in the future, we will work with the partner service to identify levers that give us an earlier indication of problems.

Jul 31, 21:21 UTC
Update - Our partner service outage has been resolved, and our service has recovered.
Jul 31, 21:20 UTC
Update - We have identified the issue is caused by a service outage to a partner, and we are working with the partner to resolve the incident.
Jul 31, 21:00 UTC
Update - We are investigating reports of users seeing some errors in Billing functionality and Billing pages.
Jul 31, 20:45 UTC
Investigating - We are currently investigating this issue.
Jul 31, 20:38 UTC
Resolved - On July 31, 2024, between 07:05 UTC and 09:01 UTC, the Actions service experienced degradation that prevented it from processing API requests and executing jobs, in particular Pages builds. On average, 2% of jobs run during the incident window were affected. This was due to some nodes in one of our partner services experiencing connectivity issues in the East US 2 region. We mitigated the incident by failing over the impacted service and re-routing its traffic out of that region.

We are working to improve our failover monitoring and processes to reduce our time to detect and mitigate issues like this one in the future.

Jul 31, 09:20 UTC
Update - Actions is operating normally.
Jul 31, 09:20 UTC
Update - We are continuing to see improvements in queuing and running Actions jobs and are monitoring for full recovery.
Jul 31, 09:13 UTC
Update - We've applied a mitigation to fix the issues with queuing and running Actions jobs. We are seeing improvements in telemetry and are monitoring for full recovery.
Jul 31, 08:28 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Jul 31, 08:07 UTC
Update - We are investigating reports of degraded performance in some Redis clusters.
Jul 31, 08:02 UTC
Investigating - We are currently investigating this issue.
Jul 31, 07:59 UTC
Resolved - This incident has been resolved.
Jul 31, 03:37 UTC
Update - The team is currently investigating a fix for issues with Codespaces. Impact continues to be limited to non-web clients. Customers receiving errors on desktop clients are encouraged to use the web client as a temporary workaround.
Jul 31, 02:55 UTC
Update - We continue investigating issues with Codespaces in multiple regions. Impact is limited to non-web clients. Customers receiving errors on desktop clients are encouraged to use the web client as a temporary workaround.
Jul 31, 01:49 UTC
Update - We are investigating issues with Codespaces in multiple regions. Some users may not be able to connect to their Codespaces at this time. We will update you on mitigation progress.
Jul 31, 01:23 UTC
Update - We are investigating reports of degraded performance for Codespaces.
Jul 31, 00:53 UTC
Investigating - We are investigating reports of degraded performance for Codespaces.
Jul 31, 00:52 UTC
Jul 30, 2024
Resolved - On July 30th, 2024, between 13:25 UTC and 18:15 UTC, customers using Larger Hosted Runners may have experienced extended queue times for jobs that depended on a Runner with VNet Injection enabled in a virtual network within the East US 2 region. Runners without VNet Injection, or with VNet Injection in other regions, were not affected. The issue was caused by an outage in a third party provider that blocked a large percentage of VM allocations in the East US 2 region. Once the underlying issue with the third party provider was resolved, job queue times returned to normal.

We are exploring adding support for customers to define VNet Injection Runners with VNets across multiple regions to minimize the impact of outages in a single region.

Jul 30, 22:10 UTC
Update - The mitigation for larger hosted runners has continued to be stable and all job delays are less than 5 minutes. We will be resolving this incident.
Jul 30, 22:09 UTC
Update - We are continuing to hold this incident open while the team ensures that mitigation put in place is stable.
Jul 30, 21:44 UTC
Update - Larger hosted runners job starts are stable and starting within expected timeframes. We are monitoring job start times in preparation to resolve this incident. No enqueued larger hosted runner jobs were dropped during this incident.
Jul 30, 21:00 UTC
Update - Over the past 30 minutes, all larger hosted runner jobs have started in less than 5 minutes. We are continuing to investigate delays in larger hosted runner job starts.
Jul 30, 20:17 UTC
Update - We are still investigating delays in customers' larger hosted runner job starts. Nearly all jobs are starting in under 5 minutes. Only one customer's larger hosted runner job was delayed by more than 5 minutes in the past 30 minutes.
Jul 30, 19:40 UTC
Update - We are seeing improvements in job start times for larger hosted runners. In the last 30 minutes, no customer jobs have been delayed by more than 5 minutes. We will continue monitoring for full recovery.
Jul 30, 19:04 UTC
Update - We are seeing run delays for larger hosted runners for a limited number of customers. We are deploying mitigations to address these delays.
Jul 30, 18:23 UTC
Investigating - We are currently investigating this issue.
Jul 30, 18:19 UTC
Resolved - This incident has been resolved.
Jul 30, 14:22 UTC
Update - We are starting to see recovery for this issue and are monitoring closely. We will keep this incident open until we are fully confident in complete recovery.
Jul 30, 14:16 UTC
Update - We have correlated the impact on Codespaces to an outage with a third party service. We are continuing to investigate ways to reduce impact on our customers while we wait for that outage to be resolved.
Jul 30, 14:09 UTC
Update - We are seeing increased failure rates for creation and resumption of Codespaces in the UK South and West Europe regions.

We are working to resolve this issue and will update again soon.

Jul 30, 13:47 UTC
Investigating - We are investigating reports of degraded performance for Codespaces.
Jul 30, 13:36 UTC
Jul 29, 2024

No incidents reported.

Jul 28, 2024

No incidents reported.

Jul 27, 2024

No incidents reported.

Jul 26, 2024

No incidents reported.

Jul 25, 2024
Resolved - This incident has been resolved.
Jul 25, 21:05 UTC
Investigating - We are currently investigating this issue.
Jul 25, 21:04 UTC
Resolved - On July 25th, 2024, between 15:30 and 19:10 UTC, the Audit Log service experienced degraded write performance. During this period, Audit Log reads remained unaffected, but customers would have encountered delays in the availability of their current audit log data. There was no data loss as a result of this incident.

The issue was isolated to a single partition within the Audit Log datastore. Upon restarting the primary partition, we observed an immediate recovery and a subsequent increase in successful writes. The backlog of log messages was fully processed by approximately 00:40 UTC on July 26th.

We are working with our datastore team to ensure mitigation is in place to prevent future impact. Additionally, we will investigate whether there are any actions we can take on our end to reduce the impact and time to mitigate in the future.

Jul 25, 19:20 UTC
Update - We have applied a fix and are seeing recovery. (Point of clarification: Impact was constrained to Audit Log Events, not all categories of events.)
Jul 25, 19:16 UTC
Investigating - We are currently investigating this issue.
Jul 25, 18:44 UTC
Jul 24, 2024

No incidents reported.

Jul 23, 2024
Resolved - This incident has been resolved.
Jul 23, 22:38 UTC
Update - We have mitigated the issue with Copilot Chat returning failures in some regions. Functionality has recovered for all Copilot Chat users.

Jul 23, 22:25 UTC
Update - We are seeing failures for Copilot Chat for users in some regions; about 20% of Copilot Chat requests are failing.
Jul 23, 21:52 UTC
Investigating - We are currently investigating this issue.
Jul 23, 21:40 UTC
Jul 22, 2024

No incidents reported.

Jul 21, 2024

No incidents reported.

Jul 20, 2024

No incidents reported.