All Systems Operational

Astro Hosted: Operational (99.94% uptime over the past 90 days)
    Scheduling and Running DAGs and Tasks: Operational (99.96%)
    Deployment Access: Operational (99.94%)
    Deployment Management: Operational (99.83%)
    Cloud UI: Operational (99.99%)
    Cloud API: Operational (99.99%)
    Cloud Image Repository: Operational (100.0%)
    Dashboards and Analytics: Operational (99.87%)

Astro Hybrid: Operational (99.96% uptime over the past 90 days)
    Scheduling and Running DAGs and Tasks: Operational (99.96%)
    Deployment Access: Operational
    Deployment Management: Operational
    Cloud UI: Operational
    Cloud API: Operational
    Cloud Image Repository: Operational
    Cluster Management: Operational

Astro Observe: Operational (99.96% uptime over the past 90 days)
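For context, each uptime percentage above maps to a downtime budget over the 90-day window: 99.94%, for example, allows roughly 78 minutes of downtime. A minimal sketch of that arithmetic (the helper name is illustrative, not part of Astro):

```python
# Illustrative only: convert an uptime percentage over a window
# into the equivalent downtime budget in minutes. This is not how
# the status page itself computes its figures.
def downtime_minutes(uptime_pct: float, window_days: int = 90) -> float:
    total_minutes = window_days * 24 * 60  # 129,600 minutes in 90 days
    return (1 - uptime_pct / 100) * total_minutes

print(round(downtime_minutes(99.94), 1))  # about 77.8 minutes over 90 days
```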
May 17, 2026

No incidents reported today.

May 16, 2026

No incidents reported.

May 15, 2026

No incidents reported.

May 14, 2026

No incidents reported.

May 13, 2026

No incidents reported.

May 12, 2026

No incidents reported.

May 11, 2026

No incidents reported.

May 10, 2026

No incidents reported.

May 9, 2026
Resolved - AWS has confirmed recovery in the affected Availability Zone (use1-az4), and system performance has returned to normal levels.

We are no longer observing elevated task failures or latency in the AWS us-east-1 region. Any previously impacted workloads should now be operating as expected.

If you continue to experience issues, please reach out to Astronomer Support for assistance.

May 9, 01:43 UTC
Monitoring - We are continuing to monitor the situation as AWS works toward full recovery in the affected Availability Zone. At this time, we are seeing signs of stabilization, and most transient failures should continue to self-resolve.

We will provide further updates once AWS confirms that the issue has been fully resolved.

May 8, 07:45 UTC
Investigating - We are currently observing elevated task failures and latency for some deployments running in the AWS us-east-1 region. This is related to an ongoing AWS incident affecting a single Availability Zone (use1-az4), where EC2 and EBS resources have experienced impairments.

What to expect:
You may see intermittent task failures or retries in your deployments. In most cases, these failures are transient and should self-resolve automatically as AWS continues recovery.

What you should do:
No immediate action is required. However, if you notice consistent or prolonged failures, please reach out to Astronomer Support, and we’ll help investigate further.

We are continuing to monitor the situation closely and will share updates as needed.

Reference: https://health.aws.amazon.com/health/status

May 8, 07:43 UTC
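The "transient failures should self-resolve" behavior described above comes down to retrying with a delay. A minimal sketch under that assumption (the function names are illustrative; an Airflow task gets comparable behavior from its `retries` and `retry_delay` operator settings):

```python
import time

def run_with_retries(task, max_attempts=3, base_delay=1.0):
    """Retry a flaky callable with exponential backoff.

    Illustrative sketch only: a transient failure (e.g. an impaired
    Availability Zone) fails a few attempts, then the call succeeds
    once the underlying issue clears.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted: surface the failure
            # back off before the next attempt (1s, 2s, 4s, ...)
            time.sleep(base_delay * 2 ** (attempt - 1))
```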
May 8, 2026
May 7, 2026
Resolved - The incident has been resolved.
May 7, 14:24 UTC
Monitoring - A fix has been implemented, and we are monitoring the results.
May 7, 13:49 UTC
Update - A fix has been prepared and is moving through deployment. We will provide another update once the rollout is complete and validation is underway.
May 7, 12:57 UTC
Identified - We have identified the cause of the delayed delivery for some time-based alerts and are implementing a fix. We will provide another update once the fix has been deployed and validated.
May 7, 11:15 UTC
Investigating - We are investigating delayed delivery for some time-based alerts. The delay is primarily visible for DAG Timeliness alerts, and may also affect Observe-related SLA, Proactive SLA, and Data Quality monitor alerts. DAG Duration and Task Duration alerts do not appear to be affected at this time.
May 7, 10:31 UTC
May 6, 2026
Resolved - Our team has determined that this only affects internal access; Astro end users are unaffected.
May 6, 20:05 UTC
Investigating - Attempting to access the DAGs page in the Airflow UI results in a 403 Forbidden error. This should not affect task execution.
May 6, 19:56 UTC
May 5, 2026

No incidents reported.

May 4, 2026
Resolved - This incident is now resolved, and all Cost Breakdown data is now up to date. The issue that caused the delay has been fixed and should not recur.
May 4, 20:15 UTC
Update - We are continuing to investigate this issue.
May 4, 10:56 UTC
Investigating - We’re investigating an issue affecting Dashboard Cost Breakdown data. For affected customers, Cost Breakdown information may appear stale and may not have updated since May 1, 2026. Our team is actively investigating the cause and working to restore current data. We’ll share another update as we have more information.
May 4, 10:56 UTC
May 3, 2026

No incidents reported.