Tuesday 10th October 2017
We have been made aware of degraded service when customers attempt to dial UK mobile phones. We are currently investigating and will post more information as it becomes available. We have also been made aware of an issue with one of our carriers, which has consequently been taken out of routing to avoid any further customer impact. It is not yet known whether this is linked to the mobile carrier issue. We apologise for any inconvenience this may be causing.
Following on from the issues reported with UK mobile telephony routing, multiple online sources and carriers have acknowledged that there is a national issue affecting routing. We will continue to check this and post updates where available. Thank you for your understanding.
After further testing, the vast majority of issues appear to have been resolved. We will continue to monitor update feeds from third-parties and post updates where available. Thank you once again for your patience during this time.
Tuesday 12th September 2017 – 17:40 – 18:17
Limited Outbound Issue:
At 17:40 on 12th September 2017, we detected degraded outbound capacity on a single node. An engineer was dispatched to investigate and reinstated full routing capability by 18:17. While this issue affected only a very small number of calls and customers, we apologise for any inconvenience it may have caused.
Saturday 19th August 2017 – 10:18 – 10:54
Inbound Routing Issue:
We were made aware of an inbound routing issue that may have affected some customers during the time specified. This was due to a misconfiguration during a service upgrade, which caused a failure of our monitoring system and subsequently caused the node in question to fail. The failure was quickly identified and we were able to mitigate the issue before 11AM. We apologise for any inconvenience this may have caused our customers during this time.
9th February 2017
Our upstream transit provider has informed us of upcoming maintenance due to be performed on its network. Although we do not anticipate any impact on our service, please be aware that the maintenance is scheduled to occur during the following hours:
Start time: 00:00 GMT 12.02.17
End time: 07:00 GMT 12.02.17
This work has now been completed.
26th October 2016
We are aware of an issue on the Nexbridge network affecting all inbound and outbound calls. We will update as soon as any information becomes available.
Our carriers experienced an outage lasting 10 minutes; the cause is still under investigation, but normal service has now been resumed.
12th October 2016
It has come to Nexbridge’s attention that Vodafone has introduced a method of blocking calls to their customers by restricting certain originating network CLIs from a number of originating networks, based upon criteria known only to them. We do not know what those criteria are; however, we have received a small number of reports from our customers/resellers that they are having difficulty calling their customers or prospects from our network when they call Vodafone number ranges (this includes previous Vodafone numbers that have been ported to other Mobile Network Operators (MNOs)).
Nexbridge has raised faults with BT and is currently liaising with Ofcom with regard to this practice, as we believe it removes consumer choice as to whether they wish to receive a call or not, and is contrary to Ofcom General Condition of Entitlement No. 17, which in summary does not allow one Communications Provider (CP) to unduly discriminate against another CP with regard to the adoption or use of telephone numbers.
The customers that have reported issues to date have opted-in data and utilise Nexbridge’s TPS Compliant service to screen calls before they are made, have compliant inbound messages and also have live agents/diallers making the calls, so are, to all intents and purposes, as compliant from an ICO perspective as they can be. Checks would of course need to be made on silent and abandoned calls; however, this is something that the regulator (Ofcom) must address directly with the call centre if reports have been received that indicate excessive silent/abandoned calls.
Nexbridge does not condone this action by Vodafone, and whilst we can understand that consumers can often be frustrated by unsolicited sales/marketing calls, this action can potentially prevent other genuine debt collection, survey, follow-up or indeed personal calls from being made from a site location to a Vodafone mobile number (or a number ported from a Vodafone range to another CP) where the network CLI has been barred.
Further updates will be provided as investigations continue.
21st July 2016
We are aware of a global issue that is currently affecting call quality for some customers. This is due to a problem with our carrier that is now being investigated. As soon as we have any further information from BT, we will provide an update.
This has now been resolved and we will update customers as we have more information.
20th July 2016
We are aware of some call quality issues this morning, which should now be resolved. We will follow up with details once we have received a report from our carriers explaining what happened.
2nd February 2016
We have been made aware of an issue with BT’s Core DNS service. Whilst this does not impact the Nexbridge network, customers using BT Broadband may be experiencing connectivity issues.
11th January 2016
We are currently aware of an issue affecting outbound calling on the Nexbridge network. We are currently in conversations with upstream carriers and shall post updated information as soon as it becomes available.
These issues are not limited to the Nexbridge network and appear to be related to an issue with the UK Mobile Telephony Platform. We shall post any updates as soon as we receive them.
BT have acknowledged that they are aware of an issue and are currently investigating as the highest priority.
29th July 2015
We are currently aware of an issue affecting connections from some clients utilising the Metronet network and are working very closely with Metronet support to identify the cause of the issue.
Further to working with Metronet Support on earlier transit issues, Metronet have now confirmed that the issue has been resolved.
9th July 2015
We experienced a brief inbound-only issue between 17:55 and 17:59 on Wednesday 8th July. The root cause of this issue is being investigated and we shall post an update when more information becomes available.
11th June 2015
Earlier issues surrounding sporadic inbound PSTN call quality should now have subsided. Our upstream carriers have identified an issue and applied a fix. Should any further issues occur, please inform us via our contact page. We apologise for any inconvenience this may have caused.
12th March 2015
Nexbridge are aware of an inbound call delivery problem that occurred between 09:01 and 09:11 this morning, affecting certain inbound number ranges from the PSTN. Upon notification of this issue, resilience plans were actioned and call flow resumed. The issue has been identified as being due to an equipment failure. We apologise for any inconvenience caused.
26th February 2015
Due to collisions with inbound and outbound call traffic, we were made aware of some issues relating to call quality and one-way audio transmission. We have worked with BT on this issue and now believe it to be resolved. Over the coming months, we will be coordinating with BT Architects to ensure that the likelihood of this issue recurring is reduced to an absolute minimum. We apologise for any inconvenience this may have caused.
9th January 2015
Due to a hardware failure our outbound service was interrupted for a few customers between 13:32 and 13:54. The issue was resolved via the implementation of a backup server.
28th June 2014
Nexbridge are aware of an issue with call delivery. The fault is within the BT core network and engineers are working to resolve it; it is also causing internet connectivity issues across the BT network.
18th June 2014
We are aware of a connectivity issue that occurred between 12:59 and 13:24 today. This has now been resolved and our engineers are currently conducting a full investigation. A fault report will be posted shortly.
18th March 2014
An issue was reported regarding “one way audio” by a number of customers, starting at 12:08. The reported issues were diagnosed by Nexbridge, and found to be caused by one BT peer connection. The affected connection was removed from service at 12:16, and traffic rebalanced across remaining resilient connections to remove the impact to service.
The fault was then reported to BT at 12:30. This fault was investigated and declared as a serious incident by BT at 12:40.
BT escalated the issue to their 2nd/3rd line support teams and to their equipment vendor, and the problem was declared resolved by BT at 14:52.
Nexbridge apologises for any brief impact to service between 12:08 and 12:16 whilst we rebalanced a small percentage of traffic away from the faulty BT connection.
23rd January 2014
An issue has been reported with inbound network calls. This was due to an issue with one of Nexbridge’s session border controllers. This issue has now been resolved as of 12:54.
17th January 2014
Following reports of intermittent one way audio on some calls, we have identified an issue with one of our upstream peers. We have removed this peer and will not return it to service until fully tested.
BT have confirmed an issue with an S3 interconnect. This issue has been resolved.
3rd January 2014
Call quality issues were reported today at around 14:00. These issues were caused by a failure in upstream peer routing. This has now been resolved.
19th November 2013
We experienced a short outage on both our inbound and outbound services. This was due to a fault with one of our upstream peers.
25th October 2013
Our customer service number is currently unavailable when dialling from within the Nexbridge network; it is, however, available from anywhere outside the network. If trying to contact us, please call from the PSTN.
17th October 2013
Following the fault report provided previously regarding issues on the evening of the 16th October 2013, call routing for a number of our customers has been intermittently impacted by further post dial delay issues. This repeat fault did not become apparent until 10:10 on the 17th October, and only came to light as an increase in call volumes was seen.
Our engineers were therefore able to diagnose the issue in real time on this occasion and, following checks on all hardware, again no alerts or alarms were present. The fault was therefore investigated in detail at both OpenSIPS and Asterisk code level and, following detailed diagnostics, an issue was found within Asterisk with domain address lookup responses from Google.
It was identified that as volumes increased, a DNS lookup, which is carried out on every call setup, was being delayed by up to 20 seconds.
On further investigating the Asterisk code, a bug was found whereby a DNS lookup is triggered even when an IP address has been provided for call routing. This lookup is sent to Google’s DNS servers and, as the volume of lookups increased, it appears that Google applied a time delay due to the volume of lookups being sent, and the fact that they were IP address lookups rather than domain name lookups. This had not been seen previously because call volumes during the investigation on the evening of the 16th October 2013 were lower than during the working day, and hence lookups performed successfully when tested.
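The missing guard described above – skipping the DNS lookup entirely when the routing target is already an IP literal – can be sketched as follows. This is a minimal illustration only, not Asterisk’s actual code; the function name is hypothetical.

```python
import ipaddress
import socket

def resolve_target(host: str) -> str:
    """Return an IP address for `host`, performing a DNS lookup
    only when `host` is not already an IP literal."""
    try:
        # Already an IPv4/IPv6 address: no DNS lookup is needed,
        # which is the guard the bug described above was missing.
        return str(ipaddress.ip_address(host))
    except ValueError:
        # A genuine hostname: fall back to a normal DNS lookup.
        return socket.gethostbyname(host)
```

With a check like this in place, high call volumes to IP-addressed routes would generate no resolver traffic at all, regardless of upstream DNS server behaviour.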
The DNS lookup was moved to an internal server and the issue was resolved. This fix was rolled out across all servers that utilise DNS lookup from 11:30 on 17th October 2013.
Our engineers will also be raising the bug with the Asterisk community, to aid in the deployment of a code change that removes DNS lookup requests when an IP address has already been specified.
Nexbridge sincerely apologises for this disruption to your services; we are confident that this issue has now been resolved.
16th October 2013
The Nexbridge network suffered a service issue affecting a number of our customers at 17:40 on the 16th October 2013. The initial indication of a problem was an increase, seen in our network management tools, in the number of unanswered calls (clear code 487). There were no network alerts, server errors, network errors, or any other indication at this stage of a network problem – simply that more calls than usual were being cleared down prematurely.
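The kind of signal described above – an unusually high share of calls clearing with code 487 before being answered – can be sketched as a simple threshold test. This is a minimal illustration only, not Nexbridge’s actual monitoring tooling; the function name, input shape and threshold are hypothetical.

```python
from collections import Counter

def flag_487_spike(clear_codes, threshold=0.2):
    """Return True when the share of calls cleared with code 487
    (caller hung up before the call was answered) exceeds `threshold`."""
    if not clear_codes:
        return False
    counts = Counter(clear_codes)
    return counts[487] / len(clear_codes) > threshold
```

A check like this flags the anomaly even when no server or network alarms fire, which is exactly how this incident first surfaced.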
Our technical team therefore contacted BT to determine if there were any known issues on calls to mobile networks, as the majority of the affected calls were to mobile numbers. BT carried out some live testing and could not identify any significant issues into the PSTN; however, they did report seeing unusual call behaviour on certain calls from Nexbridge, indicating that a number of our customers were clearing calls down after dialling, but before being connected.
At around 18:00 on the 16th October 2013, we also received a number of customer reports of calls failing to connect. On further investigation, this appeared to be the result of intermittent delays during the call setup process for outbound calls, which manifested as callers experiencing no ring tone when making a call. Inbound calls were not affected. We discovered that the calls were eventually being routed and connected successfully; however, the post dialling delay was in some cases up to 25 seconds, hence customers would have assumed that the call was not being connected, and cleared down.
Our technical team then investigated the cause of the slow call setup; however, when making test calls, they were unable to reproduce the delay from our test environment, or from our Indirect Access Service (IDA). It was then noticed that calls were only failing from certain call servers, and that the delay was more noticeable on servers with higher call volumes. As traffic tailed off over the evening, the delays reduced, until at around 20:30 we could see minimal post dialling delay – again indicating the volume-related aspect of this fault.
As traffic was now reducing, and since there were no alarms on our network or servers to indicate any issues, our engineers attended the data centre and liaised remotely to investigate whether any environmental or physical issue may have been contributing to the failure; however, no issues could be seen.
The hardware at the Data Centre was therefore fully health checked on-site, and it was noted that the throughput of the main data switch was not performing at its optimum level, however all other devices were clear. The data switch was therefore factory reset, restored from backup, and rebooted during the early hours of the 17th October, at which point the performance of the device was seen to improve considerably.
Nexbridge therefore believes that there was a hardware performance issue with the main data switch that caused delayed packets to be transmitted across its interfaces, hence causing delays in call setups between certain devices.
This was a complex issue to diagnose and rectify, and we apologise sincerely for the impact it has had on your service during the evening of the 16th October.
11th September 2013
Any customer with mobile numbers in the ranges 07700090, 07700092, 07700094 and 07978220 may experience some issues with return calls to these numbers. We are aware of the issue and are working with the upstream provider to resolve it.
Outbound service from these numbers is not affected.
27th August 2013
Due to an upstream peer routing issue, some inbound calls may fail to deliver or may suffer from poor audio quality. Our provider is investigating and we will update this page as soon as we have more information.
Update from C4L:
On Tuesday 27th August at 16:37 BST, C4L monitoring systems reported the total loss of connectivity to the device “Splinter” located at Telecity Williams. Additionally, monitoring systems detected VLAN topology changes, spanning-tree root port changes and the loss of OSPF infrastructure links to the M247 datacentre.
C4L engineers contacted remote hands in Telecity Williams to investigate urgently. This exposed the fact that remote hands engineers had been performing remedial works within the C4L network rack and had dislodged the power feed to the device “Splinter”. After power was restored to the device, C4L monitoring systems reported “Splinter” as being back in operation, and further testing showed that traffic was passing normally.
Customers whose services traverse the infrastructure links connected to this device would have seen packet loss and service disruption whilst the network re-converged. Customers with only a single homed connection to this device would have seen service disruption for the duration of the outage.
30th July 2013
As part of an upgrade to our iConsole platform last night, a configuration error caused some call routing issues.
This was reported to us at 08:31 and resolved by 08:48. We apologise for any inconvenience caused and will be looking at changing our internal deployment structure to prevent this issue recurring in the future.
29th May 2013
A localised issue with the BT IP Exchange platform has been identified this afternoon, with no ring tone being experienced in certain geographic area codes. BT has carried out initial diagnostics, and identified a faulty route which has now been removed from service to prevent further issues.
Sincere apologies for any interruption that this isolated BT issue may have caused you or your customers.
22nd May 2013
Nexbridge received customer reports of intermittent voice quality issues at around 13:10 today. Following investigation by our engineering team, and through liaison with BT, a fault was identified within the BT IP core network affecting a number of carriers utilising the BT IP backbone. BT believe the issue has now been resolved – this was following the latest update at 13:54.
We sincerely apologise for any interruption to your service as a result of the issue within the BT Network.
30th April 2013
Nexbridge experienced intermittent connectivity issues for a number of customers connected to our Reynold’s House Data Centre this evening between 17:00 and 17:35. This was due to an intermittent failure with an IP Transit provider at this site. The issue has been resolved and full service restored. Nexbridge sincerely apologises for any interruption caused to customers affected by this issue.
Update at 20:41 30th April 2013 – Please see the following from our IP provider:
This letter is in response to the network event that arose on the Cogent Network on 30/04/2013. At around 18:00 CEST, some customers connected at Slough and Manchester may have experienced intermittent connectivity and/or routing issues. After preliminary investigation, we determined that the issue was caused by a CPU spike in one of our core routers at Slough, due to abnormal CPU consumption by one of the CPU processes. Some customers connected in Manchester might have been impacted as well, as automatic routing convergence for traffic using the default path via Slough was not being triggered due to the status of the device at Slough. Traffic was manually forced to an alternative path and Manchester traffic returned to normal behaviour at around 18:10 CEST. Shortly afterwards, our IP engineering department was able to detect and shut down the conflicting process, and CPU levels and traffic on this device returned to normal at around 18:25 CEST. All relevant information has already been submitted to the vendor for further investigation. Your service should be working properly at this time. Please accept our apologies for any inconvenience this issue may have caused.
Should you be experiencing any technical problems, please contact us.