Website - Operational
Optua Sign - Operational
Optua Resolve - Operational
Openreach - Operational
CityFibre - Operational
Freedom Fibre - Operational
Vodafone - Operational
TalkTalk - Operational
ITS - Operational
FullFibre - Operational
Glide - Operational
Virgin Media - Operational
Gigaclear - Operational
Netomnia - Operational
Outbound UK Landline Calls - Operational
Outbound UK Mobile Calls - Operational
Outbound UK Non-Geographic Calls - Operational
Outbound International Calls - Operational
Inbound Calls to UK Landline Numbers - Operational
Inbound Calls to UK Non-Geographic Numbers - Operational
Inbound Calls to International Numbers - Operational
Optua Softphone - Operational
Voicemail - Operational
Hardware services - Operational
Notice history
Mar 2026
No notices reported this month
Feb 2026
- Postmortem
Summary
On 24 February 2026, a configuration change to the BIRD2 routing daemon on our primary BGP router caused a mass re-export of approximately 1.1 million IPv4 routes and 250,000 IPv6 routes to the Linux kernel. The re-export consumed BIRD's single processing thread for an extended period, preventing it from responding to BGP keepalive messages from peers. This resulted in widespread session timeouts across our peering and transit infrastructure, with the established session count dropping from approximately 60 to fewer than 20 at the lowest point. All sessions recovered automatically once the change was reverted. No customer traffic was permanently lost, though customers may have experienced degraded routing and increased latency during the incident window.
Background
Optua operates AS202076 from a virtual private server hosted by one of our partners in Glasgow. The VPS runs BIRD2 as its BGP daemon and carries a full routing table from multiple transit providers and Internet Exchange Points including LONAP, LINX LON1, LINX LON2, and BGP.Exchange. At the time of the incident, the router was handling over 1.1 million IPv4 prefixes and approximately 250,000 IPv6 prefixes across 60 BGP sessions.
The change was part of a planned maintenance task to configure a RIPE Atlas software probe on the network. The probe needed to source its traffic from Optua's own IP address space rather than from our partners' infrastructure addresses. To achieve this, a kernel preferred source attribute (krt_prefsrc) was added to the BIRD kernel protocol export filter, which instructs the Linux kernel to stamp a specific source IP on every route in the routing table.
Root cause
BIRD2 is single-threaded for route processing. When the kernel protocol export filter was changed from "export all" to an export filter containing a krt_prefsrc directive, BIRD did not apply this change incrementally. Instead, it triggered a complete re-export of every route in the routing table to the Linux kernel. This meant approximately 1.35 million routes were queued for update simultaneously.
While BIRD was processing this queue, it could not service any other tasks, including responding to BGP keepalive messages from peers. Most BGP sessions are configured with hold timers between 90 and 180 seconds. As the re-export took longer than these timers to complete, remote peers began closing sessions due to hold timer expiry.
Each closed session then generated additional work for BIRD in the form of route withdrawals and, once the peer attempted to reconnect, full session re-establishment and route re-advertisement. This created a cascading effect where the initial churn generated further churn, prolonging the recovery time.
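For illustration, the shape of the change in BIRD2 configuration terms was roughly the following. This is a minimal sketch, not our production configuration: the protocol name and the source address (a TEST-NET documentation address) are placeholders.

```
# Before: export every route to the kernel unchanged
protocol kernel kernel4 {
    ipv4 { export all; };
}

# After: stamp a preferred source address on every exported route.
# Changing this filter is NOT incremental - BIRD re-exports the
# entire table to the kernel, which is what blocked its single
# processing thread during this incident.
protocol kernel kernel4 {
    ipv4 {
        export filter {
            krt_prefsrc = 192.0.2.1;  # placeholder source address
            accept;
        };
    };
}
```

The equivalent change on the IPv6 kernel protocol uses the same attribute on the ipv6 channel.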
Timeline (all times UTC)
00:59 - Configuration change applied via birdc configure. The krt_prefsrc directive was added to both the IPv4 and IPv6 kernel protocol export filters simultaneously.
01:00 - BIRD begins re-exporting the full routing table to the Linux kernel. The birdc command line interface becomes sluggish.
01:05 - First BGP sessions begin timing out. Grafana monitoring shows the established session count declining.
01:10 - Session count drops below 40. Prefix counts on IXP route server sessions show significant reductions.
01:15 - Session count drops below 20. Decision made to revert the change immediately.
01:17 - Configuration reverted to the original kernel protocol export via birdc configure.
01:20 - BIRD begins recovering. Sessions start re-establishing as the re-export queue clears and BIRD resumes normal keepalive processing.
01:30 - Majority of sessions re-established. Prefix counts returning to normal.
01:35 - All sessions confirmed Established. Full prefix counts restored across all peers.
01:38 - Status page updated to reflect recovery. Decision made to reattempt the change using a safer approach.
01:45 - IPv6 kernel protocol updated with krt_prefsrc in isolation. At approximately 250,000 routes, the re-export completed quickly with no session loss.
02:00 - IPv4 kernel protocol updated with krt_prefsrc separately. Some brief session disruption during the re-export of 1.1 million routes, but all sessions recovered within minutes.
02:12 - RIPE Atlas probe confirmed connected under AS202076 with correct source addressing on both IPv4 and IPv6.
02:15 - Incident declared resolved.
Impact
Approximately 40 BGP sessions were lost during the peak of the incident. This included sessions with transit providers, IXP route servers, and bilateral peers at LONAP, LINX, and BGP.Exchange. During this window, traffic to and from AS202076 would have been routed suboptimally or, in some cases, may have been unreachable via certain paths. Customers relying on peering routes for low-latency connectivity would have experienced increased latency or brief interruptions.
No data was lost. No configuration was permanently altered. All sessions recovered automatically without manual intervention beyond the revert.
Contributing Factors
The change was applied to both IPv4 and IPv6 kernel protocols simultaneously, maximising the volume of routes requiring re-export.
The VPS carries a full BGP routing table from multiple sources, meaning the re-export volume was in excess of 1.35 million routes combined.
BIRD2's single-threaded architecture means any long-running operation blocks all other processing, including session maintenance.
No pre-change assessment was made of the likely duration of the re-export, or its impact on session hold timers.
The change was performed on a production router carrying live traffic without a maintenance window or pre-staged rollback plan.
Corrective Actions
The krt_prefsrc change has been successfully applied to both IPv4 and IPv6 kernel protocols by performing them sequentially rather than simultaneously, reducing the per-operation churn and allowing sessions to remain stable.
The RIPE Atlas software probe is now operational under AS202076 and sourcing traffic from Optua's own address space on both address families.
A systemd service has been created to ensure the required network interfaces and IP assignments persist across reboots.
Planned actions
We are in the process of migrating our BGP infrastructure from the current virtualised server environment to a dedicated hardware router. This will provide a purpose-built routing platform with multi-threaded BGP processing, hardware-accelerated forwarding, and dedicated memory for the routing table. This migration will eliminate the class of issue experienced during this incident, as dedicated routing hardware handles kernel route table updates fundamentally differently from a Linux VPS running BIRD in userspace.
Hold timers on all BGP sessions will be reviewed and, where appropriate, increased to provide additional headroom during future maintenance operations.
A formal change management process will be introduced for any modifications to the kernel protocol configuration, including mandatory pre-change impact assessment and a documented rollback procedure.
Future maintenance involving bulk route table operations will be performed during a scheduled maintenance window with advance notice to affected peers.
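On the hold timer review mentioned above: in BIRD2 the hold timer is a per-session setting, so each peering would be adjusted individually. A sketch of the kind of change under consideration follows; the protocol name, neighbour address, and peer AS are illustrative, not a real session.

```
protocol bgp peer_example {
    local as 202076;
    neighbor 192.0.2.254 as 64496;  # illustrative peer
    hold time 240;   # longer hold timer gives headroom during maintenance
    ipv4 {
        import all;
        export none;
    };
}
```

Note that BGP negotiates the effective hold time down to the lower of the two peers' advertised values, so increases only help where the remote side advertises a matching or higher value.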
Lessons Learned
The core lesson from this incident is that on a full-table BGP router running BIRD2 on Linux, any change to the kernel export filter is not an incremental operation. It triggers a complete re-export of the entire routing table. On a router carrying over a million prefixes, this is a significant event that can block BIRD's processing thread long enough to cause widespread session loss.
The secondary lesson is that applying changes to IPv4 and IPv6 simultaneously doubles the workload. Performing them sequentially, starting with the smaller table, allows the operator to gauge the impact before committing to the larger and riskier operation.
Finally, this incident reinforces the case for migrating to dedicated routing hardware, where the control plane and forwarding plane are separated and route table updates do not compete with session maintenance for processing time.
- Resolved
Network route table maintenance has been completed successfully. All BGP sessions are fully established and prefix counts have returned to normal across both IPv4 and IPv6. A full postmortem will be published shortly. No further action is required.
- Monitoring
All BGP sessions have re-established and prefix counts are returning to normal levels for all IPv6 sessions. We are now moving to the second stage of the maintenance, which will include all IPv4 routes. Some customer disruption should be expected whilst this is completed.
- Update
The underlying issue has been identified and resolved. All BGP sessions have re-established and prefix counts are returning to normal levels. We will be reattempting the original maintenance in a controlled manner with some expected customer degradation whilst the maintenance is completed.
- Identified
A fix has been identified and implemented, and we are now seeing recovery. Despite this, we do still see some latency and are looking to resolve this as soon as possible.
- Investigating
During scheduled network infrastructure maintenance, a routing configuration change triggered a large-scale route table update across our BGP infrastructure. This has caused temporary increases in latency and a reduction in visible prefix count on some peering sessions. All upstream transit and peering sessions remain established. Routes are expected to fully reconverge within the next 30-60 minutes. No customer traffic has been affected.
- Completed 17 Feb, 2026 at 1:39 AM
We're glad to report that maintenance was completed successfully with minimal downtime. Thank you for your patience and understanding.
- In progress 17 Feb, 2026 at 12:40 AM
Maintenance is now in progress
- Planned 17 Feb, 2026 at 12:40 AM
Optua will be undertaking system upgrades and will be decommissioning certain devices that we no longer require on our network. This may affect our website, network (including our ASN), and any associated services that link directly to our network such as Optua Sign and Optua Resolve.
We do not expect any prolonged downtime for our customers; however, you may experience increased latency while parts of our network reconfigure and reroute to provide you with the best connection they can.
ASN (AS202076) partners
We are going to be decommissioning one of our BGP devices from our network that hosts a number of upstreams, bilateral peers and IXPs. To ensure this goes smoothly, Optua has devised a complex migration plan to prevent any network issues that may affect our peers, IXPs or VIXPs directly.
We expect that our ASN may temporarily stop peering with some connected IXPs, VIXPs, and bilateral peers whilst this decommissioning takes place. We will meticulously monitor our BGP sessions to ensure no prolonged downtime. We do not expect any session to be disconnected for longer than 5-10 minutes, and sessions will automatically reconnect once they have successfully migrated to our new BGP device.
Upon completion, you may notice a new /24 IPv4 block being routed to your session. We have recently acquired a new /24 IPv4 block, and this is normal. Our RPKI records may not yet be fully configured, so please ensure your filters remain in place; once our RPKI is in place, the block should route to you automatically. This includes IXPs and VIXPs.
If you notice any issues during this migration, please contact our Network Operations Centre immediately at noc@as202076.net or by phone on 0800 054 8330; both will be monitored continuously throughout the migration. We apologise for any inconvenience this may cause and thank you for your patience.
- Resolved
After review, we are confident that this incident has been resolved for the evening. Whilst we remain wary of the high latency rates currently showing on the Optua status page, we believe these latency rates are not affecting any customers on the network and are solely the result of high latency at Optua's metric monitoring end.
We will raise this with the network team during business hours. If you experience any issues in the meantime, please contact us at contactus@optua.co.uk and we will be happy to help.
- Monitoring
We have implemented a fix on our devices and are satisfied that this issue is resolved. We will continue to monitor the affected network and will review our fix if the issue persists. We thank you for your patience during this time.
- UpdateUpdate
Latency has returned to the same levels we were seeing before we believed we had identified the issue. We are investigating the potential root cause.
- IdentifiedIdentified
We have identified the issue and are implementing a fix. We apologise for any inconvenience this issue is causing.
- InvestigatingInvestigating
We are investigating service latency with Netomnia causing ~19,000ms delays in responses to the service and reviewing customer impact. We do not expect any outage as a result of this.
- Completed 14 Feb, 2026 at 3:30 AM
Maintenance has completed successfully
- In progress 14 Feb, 2026 at 3:00 AM
Maintenance is now in progress
- Planned 14 Feb, 2026 at 3:00 AM
We are planning to undertake some essential network upgrades during this time. This may affect components such as our network, website and support systems. The customer control panel, broadband infrastructure/connectivity and cloud voice will not be affected.
NB for Optua ASN [AS202076]: We will be upgrading some of our routers during this time and therefore BGP sessions may temporarily disconnect. These should automatically re-establish with any bilateral peers and IXP route servers. If they do not, please contact noc@as202076.net for 24x7 assistance.
This should not take any longer than 30 minutes to complete.

