|
Amazon CloudFront |
Service is operating normally. |
|
|
Amazon CloudWatch (N. California) |
Service is operating normally. |
|
|
Amazon CloudWatch (N. Virginia) |
Delayed CloudWatch metrics
2:26 AM PDT We are working on restoring connectivity to a small number of EC2, EBS, and RDS resources in multiple availability zones in the US-EAST-1 region. While we restore connectivity, CloudWatch metrics for those resources will be delayed.
3:04 AM PDT We are continuing to see connectivity issues impacting EC2, EBS, and RDS resources in multiple availability zones in the US-EAST-1 region. While we restore connectivity, CloudWatch metrics for those resources will be delayed. We continue to work towards resolution.
4:47 AM PDT CloudWatch metrics are delayed for some EBS and RDS resources in the US-EAST-1 region. The delays began at 12:55 AM PDT. We have isolated the impact to a single availability zone, and are working towards a full resolution.
6:31 AM PDT We continue to work to restore CloudWatch metrics for the affected EBS and RDS resources in a single availability zone.
2:39 PM PDT We are continuing to see delays to CloudWatch metrics for EBS and RDS resources in a single availability zone in the US-EAST-1 region. We are working towards a resolution. Please see the Amazon Elastic Compute Cloud (N. Virginia) and Amazon Relational Database Service (N. Virginia) status for more details.
|
|
|
Amazon Elastic Compute Cloud (N. California) |
Service is operating normally. |
|
|
Amazon Elastic Compute Cloud (N. Virginia) |
Instance connectivity, latency and error rates.
1:41 AM PDT We are currently investigating latency and error rates with EBS volumes and connectivity issues reaching EC2 instances in the US-EAST-1 region.
2:18 AM PDT We can confirm connectivity errors impacting EC2 instances and increased latencies impacting EBS volumes in multiple availability zones in the US-EAST-1 region. Increased error rates are affecting EBS CreateVolume API calls. We continue to work towards resolution.
2:49 AM PDT We are continuing to see connectivity errors impacting EC2 instances, increased latencies impacting EBS volumes in multiple availability zones in the US-EAST-1 region, and increased error rates affecting EBS CreateVolume API calls. We are also experiencing delayed launches for EBS-backed EC2 instances in affected availability zones in the US-EAST-1 region. We continue to work towards resolution.
3:20 AM PDT Delayed EC2 instance launches and EBS API error rates are recovering. We are continuing to work towards full resolution.
4:09 AM PDT EBS volume latency and API errors have recovered in one of the two impacted Availability Zones in US-EAST-1. We are continuing to work to resolve the issues in the second impacted Availability Zone. The errors, which started at 12:55 AM PDT, began recovering at 2:55 AM PDT.
5:02 AM PDT Latency has recovered for a portion of the impacted EBS volumes. We are continuing to work to resolve the remaining issues with EBS volume latency and error rates in a single Availability Zone.
6:09 AM PDT EBS API errors and volume latencies in the affected availability zone remain. We are continuing to work towards resolution.
6:59 AM PDT There has been a moderate increase in error rates for CreateVolume. This may impact the launch of new EBS-backed EC2 instances in multiple availability zones in the US-EAST-1 region. Launches of instance store AMIs are currently unaffected. We are continuing to work on resolving this issue.
7:40 AM PDT In addition to the EBS volume latencies, launches of EBS-backed instances in the US-EAST-1 region are failing at a high rate. This is due to a high error rate for creating new volumes in this region.
8:54 AM PDT We'd like to provide additional color on what we're working on right now (please note that we always know more and understand issues better after we fully recover and dive deep into the post mortem). A networking event early this morning triggered a large amount of re-mirroring of EBS volumes in US-EAST-1. This re-mirroring created a shortage of capacity in one of the US-EAST-1 Availability Zones, which impacted new EBS volume creation as well as the pace with which we could re-mirror and recover affected EBS volumes. Additionally, one of our internal control planes for EBS has become inundated such that it's difficult to create new EBS volumes and EBS-backed instances. We are working as quickly as possible to add capacity to that one Availability Zone to speed up the re-mirroring, and working to resolve the control plane issue. We're starting to see progress on these efforts, but are not there yet. We will continue to provide updates when we have them.
10:26 AM PDT We have made significant progress in stabilizing the affected EBS control plane service. EC2 API calls that do not involve EBS resources in the affected Availability Zone are now seeing significantly reduced failures and latency and are continuing to recover. We have also brought additional capacity online in the affected Availability Zone, and stuck EBS volumes (those that were being re-mirrored) are beginning to recover. We cannot yet estimate when these volumes will be completely recovered, but we will provide an estimate as soon as we have sufficient data to do so. We have all available resources working to restore full service functionality as soon as possible. We will continue to provide updates when we have them.
11:09 AM PDT A number of people have asked us for an ETA on when we'll be fully recovered. We deeply understand why this is important and promise to share this information as soon as we have an estimate that we believe is close to accurate. Our high-level ballpark right now is that the ETA is a few hours. We can assure you that all hands are on deck to recover as quickly as possible. We will update the community as we have more information.
12:30 PM PDT We have observed successful new launches of EBS-backed instances for the past 15 minutes in all but one of the availability zones in the US-EAST-1 Region. The team is continuing to work to recover the unavailable EBS volumes as quickly as possible.
1:48 PM PDT A single Availability Zone in the US-EAST-1 Region continues to experience problems launching EBS-backed instances or creating volumes. All other Availability Zones are operating normally. Customers with snapshots of their affected volumes can re-launch their volumes and instances in another zone. We recommend that customers do not target a specific Availability Zone when launching instances. We have updated our service to avoid placing any instances in the impaired zone for untargeted requests.
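As a minimal sketch of the untargeted-launch and snapshot-restore guidance above (not part of the original bulletin): the example uses the modern boto3 SDK, which postdates this incident, and the AMI, snapshot, zone, and instance-type values are hypothetical placeholders. Omitting the Placement parameter leaves zone selection to EC2, which routes untargeted requests away from the impaired zone, while a snapshot can be restored explicitly into a different, healthy zone.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch an EBS-backed instance without pinning an Availability Zone:
# leaving out the Placement parameter lets EC2 pick a zone, and
# untargeted requests are kept out of the impaired zone.
ec2.run_instances(
    ImageId="ami-12345678",      # hypothetical EBS-backed AMI
    InstanceType="m1.large",     # hypothetical instance type
    MinCount=1,
    MaxCount=1,
)

# Re-create an affected volume from its snapshot in a different,
# healthy zone, then attach it to an instance launched there.
ec2.create_volume(
    SnapshotId="snap-12345678",   # hypothetical snapshot of the affected volume
    AvailabilityZone="us-east-1b",  # any zone other than the impaired one
)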
|
|
|
Amazon Elastic MapReduce (N. California) |
Service is operating normally. |
|
|
Amazon Elastic MapReduce (N. Virginia) |
Errors starting job flows.
7:45 AM PDT We are not able to start CC1 instances in the US-EAST-1 region. Existing job flows are operating normally.
3:12 PM PDT Customers can now start job flows with CC1 instances in the US-EAST-1 region by not targeting a specific Availability Zone.
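As a rough sketch of the untargeted approach described above (not part of the original bulletin): the example uses the modern boto3 SDK, which postdates this incident, and the job flow name, release label, and roles are hypothetical placeholders. Leaving the Placement entry out of the instance configuration lets the service choose the Availability Zone.

import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Start a job flow without a Placement/AvailabilityZone entry so the
# service may place the nodes in any healthy zone.
emr.run_job_flow(
    Name="example-job-flow",           # hypothetical name
    ReleaseLabel="emr-6.15.0",         # required by the current API; placeholder
    ServiceRole="EMR_DefaultRole",     # standard default roles
    JobFlowRole="EMR_EC2_DefaultRole",
    Instances={
        "MasterInstanceType": "cc1.4xlarge",  # CC1 type from the bulletin; long since retired
        "SlaveInstanceType": "cc1.4xlarge",
        "InstanceCount": 4,
        # No "Placement": {"AvailabilityZone": ...} entry -- leave untargeted.
    },
)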
|
|
|
Amazon Flexible Payments Service |
Service is operating normally. |
|
|
Amazon Mechanical Turk (Requester) |
Service is operating normally. |
|
|
Amazon Mechanical Turk (Worker) |
Service is operating normally. |
|
|
Amazon Relational Database Service (N. California) |
Service is operating normally. |
|
|
Amazon Relational Database Service (N. Virginia) |
Database instance connectivity and latency issues
1:48 AM PDT We are currently investigating connectivity and latency issues with RDS database instances in the US-EAST-1 region.
2:16 AM PDT We can confirm connectivity issues impacting RDS database instances across multiple availability zones in the US-EAST-1 region.
3:05 AM PDT We are continuing to see connectivity issues impacting some RDS database instances in multiple availability zones in the US-EAST-1 region. Some Multi AZ failovers are taking longer than expected. We continue to work towards resolution.
4:03 AM PDT We are making progress on failovers for Multi AZ instances and on restoring access to them. This event is also impacting RDS instance creation times in a single Availability Zone. We continue to work towards resolution.
5:06 AM PDT IO latency issues have recovered in one of the two impacted Availability Zones in US-EAST-1. We continue to make progress on restoring access and resolving IO latency issues for the remaining affected RDS database instances.
6:29 AM PDT We continue to work on restoring access to the affected Multi AZ instances and resolving the IO latency issues impacting RDS instances in the single availability zone.
8:12 AM PDT Despite the continued effort from the team to resolve the issue, we have not made any meaningful progress for the affected database instances since the last update. Create and Restore requests for RDS database instances are not succeeding in the US-EAST-1 region.
10:35 AM PDT We are making progress on restoring access and resolving IO latency issues for affected RDS instances. We recommend that you do not attempt to recover using the Reboot or Restore database instance APIs, or try to create a new user snapshot for your RDS instance; currently those requests are not being processed.
2:35 PM PDT We have restored access to the majority of RDS Multi AZ instances and continue to work on the remaining affected instances. A single Availability Zone in the US-EAST-1 region continues to experience problems launching new RDS database instances. All other Availability Zones are operating normally. Customers with snapshots/backups of their instances in the affected Availability Zone can restore them into another zone. We recommend that customers do not target a specific Availability Zone when creating or restoring new RDS database instances. We have updated our service to avoid placing any RDS instances in the impaired zone for untargeted requests.
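As a minimal sketch of the restore guidance above (not part of the original bulletin): the example uses the modern boto3 SDK, which postdates this incident, and the instance and snapshot identifiers are hypothetical placeholders. Omitting the AvailabilityZone parameter leaves placement to RDS, which keeps untargeted requests out of the impaired zone.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Restore a snapshot into a new instance without pinning an Availability
# Zone; RDS picks a healthy zone for the untargeted request.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-restored",   # hypothetical new instance name
    DBSnapshotIdentifier="mydb-snapshot",   # hypothetical snapshot of the affected instance
    DBInstanceClass="db.m5.large",          # hypothetical instance class
    # No AvailabilityZone=... argument -- leave the placement untargeted.
)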
|
|
|
Amazon Route 53 |
Service is operating normally. |
|
|
Amazon Simple Email Service (N. Virginia) |
Service is operating normally. |
|
|
Amazon Simple Notification Service (N. California) |
Service is operating normally. |
|
|
Amazon Simple Notification Service (N. Virginia) |
Service is operating normally. |
|
|
Amazon Simple Queue Service (N. California) |
Service is operating normally. |
|
|
Amazon Simple Queue Service (N. Virginia) |
Service is operating normally. |
|
|
Amazon Simple Storage Service (N. California) |
Service is operating normally. |
|
|
Amazon Simple Storage Service (US Standard) |
Service is operating normally. |
|
|
Amazon SimpleDB (N. California) |
Service is operating normally. |
|
|
Amazon SimpleDB (N. Virginia) |
Service is operating normally. |
|
|
Amazon Virtual Private Cloud (N. Virginia) |
Service is operating normally. |
|
|
AWS CloudFormation (N. California) |
Service is operating normally. |
|
|
AWS CloudFormation (N. Virginia) |
Delays creating and deleting stacks
|
|
|
AWS Elastic Beanstalk |
Increased error rates
3:16 AM PDT We can confirm increased error rates impacting Elastic Beanstalk APIs and console, and we continue to work towards resolution.
4:18 AM PDT We continue to see increased error rates impacting Elastic Beanstalk APIs and console, and we are working towards resolution.
6:54 AM PDT Our APIs and console have recovered. However, we are still experiencing issues launching new environments or new EC2 instances in existing environments.
2:16 PM PDT We have observed several successful launches of new and updated environments over the last hour. A single Availability Zone in US-EAST-1 is still experiencing problems. We recommend customers do not target a specific Availability Zone when launching instances. We have updated our service to avoid placing any instances in the impaired zone for untargeted requests.
|
|
|
AWS Import/Export |
Service is operating normally. |
|
|
AWS Management Console |
Service is operating normally. |
|