We are aware that some clients are still having connectivity issues. To rectify any remaining issues, we are working on adding a connection to Cogent, which is on-net within our datacenter. They are not currently one of our bandwidth providers.

Abovenet has updated us that they are trying to reroute the cut fiber over an alternate path, because they are unable to gain access to the fiber for repair due to the fire.

We do not have an ETA/ETR at this time. As soon as we have more information, we will send another update.

The network issue is a bend in one of our fibers to NYC, provided by Abovenet. Techs have identified where the problem is and have prepped for the work in the manhole they need to access.

The fiber cut is being worked on by Abovenet. They are saying that the crew is in place to replace the damaged span of fiber.

They are not able to give us an estimated time of repair yet.

There seems to be an issue with some routes to the USA premium network, affecting the following servers:

usa1-pn, usa2-pn, usa1-adj, usa2-adj, usa3-adj and usa4-adj

I have a ticket open with the datacentre and will update this post as I get info.
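If you want to check whether your own route is among those affected, a rough sketch like the one below can show where along the path things go wrong. This is just a Python wrapper around traceroute; the hostnames are placeholders, so substitute your actual stream server address, and note it assumes a Unix-style traceroute is installed.

```python
# Rough sketch: run a traceroute to each affected server so you can see where
# along the path the problem starts. Hostnames below are placeholders -- use
# the actual address of your own stream server.
import subprocess

SERVERS = ["usa1-pn.example.com", "usa2-pn.example.com"]  # placeholders

for host in SERVERS:
    print(f"--- {host} ---")
    # "traceroute" on Linux/macOS; on Windows use ["tracert", host] instead.
    result = subprocess.run(["traceroute", host], capture_output=True, text=True)
    print(result.stdout)
```

If the trace stalls at the same hop for every affected server, that hop (or the link just past it) is a good candidate for where the route problem sits.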

UPDATE: This seems to have been resolved; it appears to have been a temporary issue with a supplier outside of the network.

This only seems to be affecting usa1-pn and usa2-pn, but I’ve listed this post under the other servers that could also be affected.

There seems to be packet loss on the network. I have an emergency ticket open with the datacentre and hope to get this resolved ASAP – in fact, as I’m writing this, things seem to be improving already.

UPDATE (19:35): I can confirm that as I was finishing the post, buffering completely stopped. I’ve also had word from the network admin confirming that they were working on blocking a flood which lasted for about 5-10 minutes.
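For anyone who wants to verify packet loss from their own connection, here is a quick sketch that sends a burst of pings and reports the loss percentage. It assumes a Unix-style ping with the -c flag, and the hostname is a placeholder.

```python
# Quick-and-dirty packet loss check: send a burst of pings and count how many
# replies came back. Assumes a Unix-style "ping" (-c = count); adjust for Windows.
import subprocess

def loss_percent(host: str, count: int = 50) -> float:
    """Return the percentage of pings to `host` that received no reply."""
    proc = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True,
    )
    received = proc.stdout.count("bytes from")  # one line per reply
    return 100.0 * (count - received) / count

# Placeholder hostname -- point this at your own stream server.
print(f"loss: {loss_percent('usa1-pn.example.com'):.1f}%")
```

Anything consistently above 0% during a quiet period is worth attaching to a ticket.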

There have been some issues on the USA premium network lately. Until today, the engineers there suggested that it was a problem with the servers, but I’ve managed to gather evidence showing that it’s the network.

They are currently working to find the cause of the intermittent buffering that we’ve experienced over the last couple of weeks – it only seems to happen for a few minutes at a time, but I’ve told them how irritating this is for people who use the network for streaming.

I expect this to be resolved soon, and if not then customers will be relocated.

Sorry for any inconvenience caused.

I’m not 100% certain, but there may be an issue with the USA premium network: a customer has reported issues with AutoDJ and I’ve heard a little buffering on another stream. There’s no packet loss and there isn’t any lag, but something is amiss.

I’ll contact the network people and keep you updated here.

UPDATE: I have a ticket open with the datacentre, who will look into this shortly. Please don’t open a support ticket, as it may not be answered right away if it is regarding this issue.

According to the Radiotoolbox stream test (http://www.radiotoolbox.com/images/sbin/stream_test_graph.png?id=13119), the streams are perfect, which does suggest that only certain routes are affected.
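If you want to sanity-check a stream from your own end, the sketch below reads from a stream for a short window and compares the average download rate against the stream’s bitrate. The URL and bitrate are placeholders; if your rate can’t keep up while the Radiotoolbox test looks clean, your route is the likely culprit.

```python
# Rough check: can the stream be pulled at least as fast as real time from here?
# Read from the stream URL for a fixed window and compare the average download
# rate against the stream's nominal bitrate.
import time
import urllib.request

STREAM_URL = "http://example.com:8000/stream"  # placeholder stream URL
BITRATE_KBPS = 128                             # placeholder stream bitrate
WINDOW_SECONDS = 30

received = 0
start = time.monotonic()
with urllib.request.urlopen(STREAM_URL) as resp:
    while time.monotonic() - start < WINDOW_SECONDS:
        chunk = resp.read(4096)
        if not chunk:
            break
        received += len(chunk)

elapsed = time.monotonic() - start
rate_kbps = received * 8 / 1000 / elapsed
print(f"average rate: {rate_kbps:.0f} kbps (stream needs ~{BITRATE_KBPS} kbps)")
```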

UPDATE3 22:09: Appears to be fixed

The issue seems to have cleared up for the moment; I’ll post here with any info as I get it.

UPDATE4 22:19:

The issue was with one of the carriers, XO – it seems to have calmed down now, and they are looking into the problem. The problem was outside the control of us and the datacentre, as it was on an international link outside the network.

We are currently experiencing problems on this network; this is being looked into.

UPDATE: All servers are back online now; the downtime lasted about 20 minutes. If your stream is still offline, please try restarting it, and open a support ticket if that doesn’t work.

UPDATE2: Here’s a message from the provider that was affected.

“There was about 15 minutes of network downtime for servers in our Teb2 location. We currently have two datacenters located in the same building (Teb1 and Teb2). In our network today a provider dropped and caused a BGP flag. Our Teb1 datacenter had no issues re-building routes from this. However the routers in Teb2 did not handle this as quickly as Teb1. We will be adding in another router in Teb2 to ensure something like this can not happen again.”