GitHub Status
All Systems Operational
Git Operations Operational
API Requests Operational
Webhooks Operational
Issues Operational
Pull Requests Operational
Actions Operational
Packages Operational
Pages Operational
Codespaces Operational
Copilot Operational
Status key: Operational, Degraded Performance, Partial Outage, Major Outage, Maintenance
Past Incidents
Jul 12, 2024

No incidents reported today.

Jul 11, 2024
Resolved - This incident has been resolved.
Jul 11, 15:21 UTC
Update - Copilot is operating normally.
Jul 11, 15:21 UTC
Update - We have mitigated the intermittent timeout errors impacting Copilot’s Chat functionality and expect the incident to be resolved shortly.
Jul 11, 15:19 UTC
Update - We continue to investigate the cause of intermittent timeouts impacting Copilot’s Chat functionality. This is impacting a small fraction of customers. The timeout errors we are seeing have returned to healthy levels over the last 60 minutes, but we are monitoring closely.
Jul 11, 15:04 UTC
Update - We continue to investigate the cause of intermittent timeouts impacting Copilot’s Chat functionality. This is impacting a small fraction of customers. We will provide further updates as we continue resolving the issue.
Jul 11, 14:14 UTC
Update - We continue to investigate the cause of intermittent timeouts impacting Copilot’s Chat functionality. This is impacting a small fraction of customers. We will provide further updates as we continue resolving the issue.
Jul 11, 13:32 UTC
Update - Copilot's Chat functionality is experiencing intermittent timeouts; we are investigating the issue.
Jul 11, 13:02 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Jul 11, 13:02 UTC
Jul 10, 2024

No incidents reported.

Jul 9, 2024

No incidents reported.

Jul 8, 2024
Resolved - On July 8th, 2024, between 18:18 UTC and 19:11 UTC, various services relying on static assets were degraded, including user-uploaded content on github.com, access to docs.github.com and Pages sites, and downloads of Release assets and Packages.

The outage primarily affected users in the vicinity of New York City, USA, due to a local CDN disruption.

Service was restored without our intervention.

We are working to improve our external monitoring, which failed to detect the issue, and we will evaluate a backup mechanism to keep critical services, such as asset loading on GitHub.com, available in the event of an outage with our CDN (a monitoring sketch follows this incident's timeline).

Jul 8, 19:45 UTC
Update - Issues and Pages are operating normally.
Jul 8, 19:44 UTC
Update - Issues and Pages are experiencing degraded performance. We are continuing to investigate.
Jul 8, 19:44 UTC
Update - Issues is operating normally.
Jul 8, 19:44 UTC
Update - Issues is experiencing degraded performance. We are continuing to investigate.
Jul 8, 19:44 UTC
Update - Pages and Issues are operating normally.
Jul 8, 19:44 UTC
Update - Our assets are serving normally again and all impact is resolved.
Jul 8, 19:44 UTC
Update - We are beginning to see recovery of our assets and are monitoring for additional impact.
Jul 8, 19:16 UTC
Update - githubstatus.com may not be available or may be degraded for some users in some regions.
Jul 8, 19:12 UTC
Update - We are investigating issues with loading assets, including JavaScript assets, on various parts of the site for some users.
Jul 8, 19:02 UTC
Investigating - We are investigating reports of degraded performance for Issues and Pages
Jul 8, 19:01 UTC
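
The post-mortem above notes that external monitoring failed to detect the CDN disruption. As a rough illustration of an out-of-band synthetic check, not GitHub's actual tooling, the Python sketch below polls a static asset from outside the serving path and flags consecutive failures; the probe URL, check interval, and alert threshold are assumptions.

    # Minimal external probe for static-asset availability.
    # Illustrative only: the URL, timeout, interval, and threshold below are
    # assumptions, not GitHub's monitoring configuration.
    import time
    import urllib.error
    import urllib.request

    PROBE_URL = "https://github.githubassets.com/favicons/favicon.png"  # hypothetical probe target
    TIMEOUT_SECONDS = 5
    CHECK_INTERVAL_SECONDS = 30
    FAILURES_BEFORE_ALERT = 3  # consecutive failures before paging

    def asset_is_available(url: str) -> bool:
        """Return True if the asset responds with HTTP 200 within the timeout."""
        try:
            with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as resp:
                return resp.status == 200
        except (urllib.error.URLError, OSError):
            return False

    def main() -> None:
        consecutive_failures = 0
        while True:
            if asset_is_available(PROBE_URL):
                consecutive_failures = 0
            else:
                consecutive_failures += 1
                if consecutive_failures >= FAILURES_BEFORE_ALERT:
                    # A real check would page an on-call rotation; alerting
                    # backends vary, so this sketch just prints.
                    print(f"ALERT: {PROBE_URL} failed {consecutive_failures} checks in a row")
            time.sleep(CHECK_INTERVAL_SECONDS)

    if __name__ == "__main__":
        main()

Because the July 8 outage was regional, a probe like this only helps if it runs from several vantage points outside the CDN; a single location could easily miss a failure that affects other regions.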
Jul 7, 2024

No incidents reported.

Jul 6, 2024

No incidents reported.

Jul 5, 2024
Resolved - On July 5, 2024, between 16:31 UTC and 18:08 UTC, the Webhooks service was degraded, delaying all webhook deliveries for customers. Delivery delays averaged 24 minutes, with a maximum of 71 minutes. The incident was caused by a configuration change to the Webhooks service that led to unauthenticated requests being sent to the background job cluster. The configuration error was repaired and redeploying the service resolved it; however, the redeploy created a thundering herd that overloaded the background job queue cluster, pushing its API layer to maximum capacity and causing timeouts for other job clients, which presented as increased latency for API calls.

Shortly after resolving the authentication misconfiguration, a separate issue in the background job processing service caused health probes to fail, reducing capacity in the background job API layer and magnifying the effects of the thundering herd. From 18:21 UTC to 21:14 UTC, Actions runs on PRs were delayed by approximately 2 minutes on average, with a maximum of 12 minutes. A deployment of the background job processing service remediated the issue.

To reduce our time to detection, we have streamlined our dashboards and added alerting for this specific runtime behavior. Additionally, we are working to reduce the blast radius of background job incidents through better workload isolation (a retry-backoff sketch follows this incident's timeline).

Jul 5, 20:57 UTC
Update - We are seeing recovery in Actions start times and are monitoring for any further impact.
Jul 5, 20:44 UTC
Update - We are still seeing about 5% of Actions runs taking longer than 5 minutes to start. We are scaling up and shifting resources to speed recovery.
Jul 5, 20:32 UTC
Update - We are still seeing about 5% of Actions runs taking longer than 5 minutes to start. We are evaluating mitigations to increase capacity and reduce latency.
Jul 5, 19:58 UTC
Update - We are seeing about 5% of Actions runs not starting within 5 minutes. We are continuing investigation.
Jul 5, 19:19 UTC
Update - We have seen recovery of Actions run delays. Keeping the incident open to monitor for full recovery.
Jul 5, 18:40 UTC
Update - Webhooks is operating normally.
Jul 5, 18:10 UTC
Update - We are seeing delays in Actions runs related to the recovery of webhook deliveries. We expect this to resolve as webhook deliveries catch up.
Jul 5, 18:09 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Jul 5, 18:07 UTC
Update - We are seeing recovery as webhooks are being delivered again. We are burning down our queue of events. No events have been lost. New webhook deliveries will be delayed while this process recovers.
Jul 5, 17:57 UTC
Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
Jul 5, 17:55 UTC
Update - We are reverting a configuration change that is suspected to contribute to the problem with webhook deliveries.
Jul 5, 17:42 UTC
Update - Our telemetry shows that most webhooks are failing to be delivered. We are queueing all undelivered webhooks and are working to remediate the problem.
Jul 5, 17:20 UTC
Update - Webhooks is experiencing degraded availability. We are continuing to investigate.
Jul 5, 17:17 UTC
Investigating - We are investigating reports of degraded performance for Webhooks
Jul 5, 17:04 UTC
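
The thundering herd described in the post-mortem above is a classic failure mode: a large backlog retries against a service that has only just recovered and knocks it over again. As a hedged sketch only (GitHub has not published its implementation), the Python snippet below shows capped exponential backoff with full jitter, a common way to spread such retries out; the delay constants and the simulated deliver() call are assumptions.

    # Capped exponential backoff with full jitter for draining a retry backlog.
    # The constants and the fake deliver() call are illustrative assumptions.
    import random
    import time

    BASE_DELAY_SECONDS = 1.0
    MAX_DELAY_SECONDS = 60.0
    MAX_ATTEMPTS = 8

    def deliver(payload: dict) -> bool:
        """Stand-in for a real delivery call (e.g. an HTTP POST to a hook URL); fails ~50% of the time here."""
        return random.random() > 0.5

    def deliver_with_backoff(payload: dict) -> bool:
        for attempt in range(MAX_ATTEMPTS):
            try:
                if deliver(payload):
                    return True
            except Exception:
                pass  # treat any delivery error as retryable in this sketch
            # Full jitter: sleep a random amount up to the capped exponential
            # delay, so thousands of queued retries spread out rather than
            # waking up in lockstep.
            cap = min(MAX_DELAY_SECONDS, BASE_DELAY_SECONDS * (2 ** attempt))
            time.sleep(random.uniform(0, cap))
        return False

    print(deliver_with_backoff({"event": "push"}))

The jitter is the important part: a fixed exponential backoff without it merely shifts the herd later in time, because every queued delivery would compute the same retry schedule.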
Jul 4, 2024

No incidents reported.

Jul 3, 2024
Resolved - On July 3, 2024, between 13:34 UTC and 16:42 UTC, the GitHub documentation site was degraded and returned 500 errors for non-cached pages. The error rate was generally 2-5% of requests to the service and peaked at 5%. This was due to an observability misconfiguration. We mitigated the incident by updating the observability configuration and redeploying. We are working to reduce our time to detection and mitigation of issues like this one in the future (an error-rate sketch follows this incident's timeline).
Jul 3, 16:40 UTC
Update - Mitigation measures have been rolled out and we're seeing errors disappear in our telemetry. We'll continue to monitor our services closely to ensure the docs site is fully healthy.
Jul 3, 16:37 UTC
Update - We have identified a likely cause of the errors with GitHub Docs and are working on a mitigation.
Jul 3, 15:59 UTC
Investigating - We are currently investigating this issue.
Jul 3, 15:24 UTC
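
The post-mortem above calls out time to detection for an elevated 500 rate. Purely as an illustration, a sliding-window error-rate check could look like the Python sketch below; the 1000-request window and 2% threshold are assumptions, not GitHub's alerting configuration.

    # Sliding-window error-rate check; window size and threshold are assumed.
    from collections import deque

    class ErrorRateMonitor:
        def __init__(self, window_size: int = 1000, threshold: float = 0.02):
            self.window = deque(maxlen=window_size)  # 1 = server error, 0 = success
            self.threshold = threshold

        def record(self, status_code: int) -> bool:
            """Record one response; return True if the windowed error rate breaches the threshold."""
            self.window.append(1 if status_code >= 500 else 0)
            if len(self.window) < self.window.maxlen:
                return False  # not enough traffic yet to judge
            return sum(self.window) / len(self.window) >= self.threshold

    # Example: feed it response codes as requests complete.
    monitor = ErrorRateMonitor()
    for code in [200] * 990 + [500] * 25:
        if monitor.record(code):
            print("error rate above 2% over the last 1000 requests")
            break

The window size is the trade-off: a larger window smooths out noise but delays the alert, while a smaller one detects faster at the cost of paging on brief error bursts.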
Jul 2, 2024
Resolved - This incident has been resolved.
Jul 2, 19:24 UTC
Update - API Requests is operating normally.
Jul 2, 19:24 UTC
Update - The fix has been rolled out and our telemetry indicates that the errors with code search have resolved.
Jul 2, 19:22 UTC
Update - Faulty data in an in-memory store is causing around one third of code search requests to fail. The team has identified the issue and is working on rolling out a fix.
Jul 2, 19:18 UTC
Update - API Requests is experiencing degraded performance. We are continuing to investigate.
Jul 2, 18:47 UTC
Investigating - We are currently investigating this issue.
Jul 2, 18:45 UTC
Resolved - This incident has been resolved.
Jul 2, 01:14 UTC
Update - As a result of rerouting traffic, we have seen overall network link health return to normal.
Jul 2, 01:14 UTC
Update - We are investigating connection issues with one of our network links. We are working to reroute traffic.
Jul 2, 00:52 UTC
Update - We are investigating intermittent network connection issues. These issues appear to be limited to customers hosted on AWS that are connecting to GitHub's network.
Jul 2, 00:15 UTC
Update - We're investigating reports of intermittent timeouts and connection errors for git clone operations.
Jul 1, 23:32 UTC
Investigating - We are investigating reports of degraded performance for Git Operations
Jul 1, 22:59 UTC
Jul 1, 2024
Jun 30, 2024

No incidents reported.

Jun 29, 2024

No incidents reported.

Jun 28, 2024
Resolved - On June 28th, 2024, at 16:06 UTC, a backend update by GitHub triggered a significant number of long-running Organization membership update jobs in our job processing system. The job queue depth rose as these update jobs consumed most of our job worker capacity. This resulted in delays for other jobs across services such as Pull Requests and PR-related Actions workflows. We mitigated the impact to Pull Requests and Actions at 19:32 UTC by pausing all Organization membership update jobs. We deployed a code change at 22:30 UTC to skip over the jobs queued by the backend change and re-enabled Organization membership update jobs. We restored the Organization membership update functionality at 22:52 UTC, including all membership changes queued during the incident.

During the incident, about 15% of Action workflow runs experienced a delay of more than five minutes. In addition, Pull Requests had delays in determining merge eligibility and starting associated Action workflows for the duration of the incident. Organization membership updates saw delays for upwards of five hours.

To prevent a similar event from impacting our users in the future, we are working to: improve our job management system to better manage our job worker capacity; add more precise monitoring for job delays; and strengthen our testing practices (a workload-isolation sketch follows this incident's timeline).

Jun 28, 22:51 UTC
Update - We are continuing to work to mitigate delays in organization membership changes.
Jun 28, 22:18 UTC
Update - We are still actively working to mitigate delays in organization membership changes.
Jun 28, 21:45 UTC
Update - We are actively working to mitigate delays in organization membership changes. Actions and Pull Requests are both functioning normally now.
Jun 28, 20:46 UTC
Update - Actions is operating normally.
Jun 28, 20:00 UTC
Update - Pull Requests is operating normally.
Jun 28, 19:59 UTC
Update - We are continuing to apply mitigations and are seeing improvement in creating pull request merge commits and Actions runs for pull request events. Applying changes to organization members remains delayed.
Jun 28, 19:51 UTC
Update - We are continuing to work on mitigating delays creating pull request merge commits, Actions runs for pull request events, and changes to organization members.
Jun 28, 19:03 UTC
Update - Actions runs triggered by pull requests are experiencing start delays. We have engaged the appropriate teams and are investigating the issue.
Jun 28, 17:59 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Jun 28, 17:58 UTC
Investigating - We are investigating reports of degraded performance for Actions
Jun 28, 17:34 UTC
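
The workload isolation mentioned in the post-mortem above generally means giving each job class its own bounded worker pool, so a flood of slow jobs in one class cannot consume the capacity other classes depend on. The Python sketch below illustrates the idea under stated assumptions: the queue names, pool sizes, and sample jobs are hypothetical and do not reflect GitHub's job system.

    # Per-queue worker pools as a minimal form of workload isolation.
    # Queue names, pool sizes, and the sample jobs are illustrative assumptions.
    import queue
    import threading
    import time

    POOL_SIZES = {"membership_updates": 2, "pull_requests": 4}  # hypothetical split
    queues = {name: queue.Queue() for name in POOL_SIZES}

    def worker(q: queue.Queue) -> None:
        # Daemon worker loop: runs jobs from its own queue only, so slow jobs
        # block nothing but their own pool.
        while True:
            job = q.get()
            job()
            q.task_done()

    for name, size in POOL_SIZES.items():
        for _ in range(size):
            threading.Thread(target=worker, args=(queues[name],), daemon=True).start()

    # A burst of slow "membership" jobs plus a few fast "pull request" jobs:
    # the fast jobs still run promptly because their pool is separate.
    for _ in range(10):
        queues["membership_updates"].put(lambda: time.sleep(1.0))
    for i in range(5):
        queues["pull_requests"].put(lambda i=i: print(f"PR job {i} done"))

    for q in queues.values():
        q.join()

With separate pools, the ten slow membership jobs tie up only their own two workers while the pull request jobs finish immediately, which is the property the shared worker pool in the incident above lacked.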