Photo via labsji/Flickr
According to Amazon's status updates during the glitch, the problem was with Elastic Block Store (EBS) volumes in its EC2 cloud, which were suffering degraded performance along with instance launch errors and network packet loss. There was also a glitch with the load balancers, but on the whole the problems were resolved over the next four to five hours. Or, as an AWS spokesperson put it in an email:

"Yesterday, from 12:51 PM PDT to 1:42 PM PDT, we experienced network packet loss, which caused a small number of EBS volumes in a single Availability Zone ("AZ") in US-East-1 to experience degraded performance, and a small number of EC2 instances to become unreachable in that same single AZ. The root cause was a 'gray' (partial) failure with a networking device that caused a portion of the AZ to experience packet loss. The network issue was resolved and most volumes and instances returned to normal."

Affected customers, meanwhile, were still tallying the damage on Twitter:

Add @KJ_Online @OnlineSentinel and @PressHerald to the victims of the #AWS outage. Sites up but can't add or edit content. #techwoes
— Doug Vanderweide (@dougvdotcom) August 25, 2013
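The two symptoms Amazon named, degraded EBS volumes and unreachable EC2 instances in one Availability Zone, are things an operator can watch for directly. As a rough illustration only (this is a minimal sketch using the modern boto3 SDK, not anything Amazon referenced, and every name in it is illustrative):

import boto3

REGION = "us-east-1"  # the region named in the AWS statement
ec2 = boto3.client("ec2", region_name=REGION)

# Flag EBS volumes whose status checks are not "ok" (e.g. "impaired"),
# grouped by the Availability Zone they live in.
for page in ec2.get_paginator("describe_volume_status").paginate():
    for vol in page["VolumeStatuses"]:
        if vol["VolumeStatus"]["Status"] != "ok":
            print(f'volume {vol["VolumeId"]} in {vol["AvailabilityZone"]}: '
                  f'{vol["VolumeStatus"]["Status"]}')

# Flag running instances that are failing their reachability checks.
for page in ec2.get_paginator("describe_instance_status").paginate():
    for inst in page["InstanceStatuses"]:
        if inst["InstanceStatus"]["Status"] != "ok":
            print(f'instance {inst["InstanceId"]} in {inst["AvailabilityZone"]}: '
                  f'{inst["InstanceStatus"]["Status"]}')

One caveat with a sketch like this: describe_instance_status only reports running instances by default, so an instance that has dropped out of the fleet entirely would need a separate describe_instances pass to spot.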