Our engineering team, working with our vendors, has determined that the latency seen by several datastores was caused by fiber-channel aborts. These aborts were traced to a specific path from the compute services to the storage switching infrastructure. We have since removed this path and verified that all other paths are operating as expected.
Oct 20, 14:58 EDT
Maintenance has been completed successfully and we are monitoring the results.
Oct 20, 00:17 EDT
Performance remains within acceptable limits and we have identified the offending resources. We will schedule maintenance later this evening; performance may be affected during the maintenance window.
Oct 19, 20:05 EDT
Symptoms are diminishing and performance is improving. We continue to work with the vendor on a resolution.
Oct 19, 18:17 EDT
We continue to work with our vendor to address latency and will provide updates as they become available.
Oct 19, 16:02 EDT
We are actively working with a vendor on a latency issue that is causing desktop slowness and disconnects. We will continue to provide updates as they become available.
Oct 19, 15:36 EDT
We are currently investigating this issue.
Oct 19, 15:23 EDT