All Systems Operational

Astro Hosted: Operational (99.96% uptime over the past 90 days)
  Scheduling and Running DAGs and Tasks: Operational (99.94% uptime)
  Deployment Access: Operational (99.98% uptime)
  Deployment Management: Operational (99.84% uptime)
  Cloud UI: Operational (99.99% uptime)
  Cloud API: Operational (99.99% uptime)
  Cloud Image Repository: Operational (100.0% uptime)
  Dashboards and Analytics: Operational (100.0% uptime)

Astro Hybrid: Operational (99.96% uptime over the past 90 days)
  Scheduling and Running DAGs and Tasks: Operational (99.96% uptime)
  Deployment Access: Operational
  Deployment Management: Operational
  Cloud UI: Operational
  Cloud API: Operational
  Cloud Image Repository: Operational
  Cluster Management: Operational

Astro Observe: Operational (99.9% uptime over the past 90 days)
Mar 9, 2026
Resolved - Per Google, the GKE incident is now resolved, and all related issues appear to have cleared up on Astro. If you are still experiencing an issue, please raise a support ticket.
Mar 9, 16:11 UTC
Identified - We are seeing signs of recovery across some affected clusters.
Mar 9, 15:30 UTC
Update - Google has confirmed to Astronomer that this is an issue with GKE in the region. We are following the issue closely.
Mar 9, 13:53 UTC
Investigating - Some pods in us-central1 are experiencing CrashLoopBackOff errors. Because some of these errors affect the components that control DAG-only deploys, newly scaled-up worker pods are also affected in some cases. We are actively investigating.
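For Astro Hybrid operators who run their own data plane, the crash-looping pods described above can be surfaced with a short filter; a hedged sketch (kubectl access and the default `kubectl get pods` column layout are assumptions, not part of the incident report):

```shell
# Filter `kubectl get pods --all-namespaces` table output down to pods
# whose STATUS column reports CrashLoopBackOff.
crashlooping_pods() {
  # Expected input columns: NAMESPACE NAME READY STATUS RESTARTS AGE
  awk '$4 == "CrashLoopBackOff" { print $1 "/" $2 }'
}
# Usage (requires cluster access):
#   kubectl get pods --all-namespaces | crashlooping_pods
```

The awk stage keys on the fourth column, so it skips the header row and Running pods without needing jsonpath.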
Mar 9, 13:29 UTC
Mar 8, 2026

No incidents reported.

Mar 7, 2026

No incidents reported.

Mar 6, 2026

No incidents reported.

Mar 5, 2026
Resolved - This incident has been resolved.
Mar 5, 23:05 UTC
Monitoring - Azure confirmed that the fix has been successfully deployed to East US 2 region. We are monitoring the results.
Mar 5, 16:49 UTC
Identified - We have identified the root cause of the autoscaling issue affecting certain deployments on the Shared Azure Cluster in the East US 2 region. The issue is related to underlying infrastructure behaviour within the cloud provider's environment.

Azure has acknowledged the issue and is currently rolling out a hotfix across regions. We are actively monitoring the rollout and will provide further updates as they become available.

Mar 4, 14:31 UTC
Investigating - We are currently investigating an issue affecting Airflow worker autoscaling for some deployments hosted on the Shared Azure Cluster in the East US 2 region.

As a result of this issue, worker pods may not scale up as expected in response to workload demand. This can lead to tasks remaining in a queued state longer than usual and, in some cases, failing due to queued timeouts.

Our engineering team is actively working to identify the root cause and restore normal autoscaling behaviour. We will provide further updates as more information becomes available.

Impact:
• Affected deployments may experience delayed task execution.
• Worker pods may not scale up from zero or may not scale as expected under load.

We will continue to share updates on this page as we make progress toward resolution.
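One hedged stopgap for the queued-timeout failures described above, assuming the affected deployment runs Airflow 2.6+ where the window is governed by `scheduler.task_queued_timeout` (seconds, default 600); this is a sketch, not official Astronomer remediation guidance:

```shell
# Widen the window before queued tasks are marked failed, so tasks held
# up by slow worker scale-up get more time to be picked up. Assumes
# Airflow 2.6+, where scheduler.task_queued_timeout controls this.
export AIRFLOW__SCHEDULER__TASK_QUEUED_TIMEOUT=1800
```

This only buys queued tasks time; it does not fix the underlying scale-up problem, and the value should be reverted once autoscaling recovers.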

Mar 4, 14:10 UTC
Mar 4, 2026
Mar 3, 2026

No incidents reported.

Mar 2, 2026

No incidents reported.

Mar 1, 2026

No incidents reported.

Feb 28, 2026
Resolved - Astro Observe is fully operational! The incident is now resolved.

Feb 28, 21:32 UTC
Identified - Maintenance completed. We are now working to bring Astro Observe back to operational status.

Feb 28, 19:36 UTC
Investigating - We are still in the process of performing critical maintenance on the Astro control plane. Due to upstream dependencies on our cloud providers, our changes have taken longer than anticipated. We are actively working to resolve this as quickly as possible and expect to conclude by 20:00 UTC.

As a reminder: your DAGs and tasks will continue to run normally, but you may not be able to view them in the UI during the maintenance. DAGs that use the Astro API (such as cross-deployment DAG triggering) will be impacted until this work is complete.

Feb 28, 19:10 UTC
Completed - The scheduled maintenance has been completed.
Feb 28, 19:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 28, 16:00 UTC
Scheduled - The Astro control plane will be undergoing maintenance that will require downtime of the Astro UI and Astro API. DAGs and tasks of existing deployments will continue to run normally even if they are not viewable via the UI, unless a task itself uses the API as part of its operation (such as when a task triggers a DAG in a different Astro deployment).
Feb 14, 03:15 UTC
Feb 27, 2026

No incidents reported.

Feb 26, 2026

No incidents reported.

Feb 25, 2026

No incidents reported.

Feb 24, 2026
Resolved - This incident has been resolved.
Feb 24, 22:49 UTC
Monitoring - 13.5.0 has been officially yanked from the registry. 13.5.1 will follow, but for now stay on 13.4.0.
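If the version numbers above refer to the Astro Runtime base image (an assumption; the incident text does not name the artifact), staying on 13.4.0 amounts to pinning it in the project Dockerfile; a hedged sketch:

```shell
# Hedged sketch, assuming the yanked 13.5.0 is the Astro Runtime image
# (the version scheme matches, though the incident does not say so).
# Until 13.5.1 ships, keep the Dockerfile base image pinned, e.g.:
#
#   FROM quay.io/astronomer/astro-runtime:13.4.0
#
# Quick check that the project Dockerfile is not on the yanked release
# (succeeds when no match is found):
! grep -q "astro-runtime:13.5.0" Dockerfile 2>/dev/null
```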
Feb 24, 22:27 UTC
Investigating - We are currently investigating this issue.
Feb 24, 21:51 UTC
Feb 23, 2026

No incidents reported.