TCP Congestion Control – again

This topic keeps coming up again and again:

Which TCP congestion control algorithm has the best performance?
Which handles high latency, delays, and bottlenecks better?
And some might ask, what does it mean when the congestion window becomes zero? AH…

Yes, it's true: no matter how much technology improves, how fast chipsets and processors become, or how advanced fast packet forwarding gets, congestion still happens.

Today’s topic isn’t about local datacenter LAN-side congestion (which may be caused by high network usage in clustered, fabric-based structures — AI workloads, maybe?).

We’re talking about long-haul network connections — like between two distant endpoints. Take Starlink, for example: a satellite-based system with unpredictable latency and environmental effects.

So, why are these "OLD" TCP congestion control algorithms still in use today?


Can monitoring TCP congestion control behavior — like how the congestion window changes — be used to detect potential congestion and trigger a BGP or transit path switch?
Maybe — but it depends on the access technology. If the last mile is wireless, measurements might not be reliable enough.
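
For reference, here is a minimal sketch (my own, not from any particular tool) of what such monitoring could look like on a Linux sender: it samples the per-flow congestion window by parsing the output of "ss -ti". The 10-second interval and the idea of feeding the numbers into a path decision are assumptions for illustration.

# Minimal sketch: sample the congestion window (cwnd) of established TCP flows
# by parsing "ss -ti" output on Linux. Interval and decision logic are illustrative.
import re, subprocess, time

CWND_RE = re.compile(r"cwnd:(\d+)")

def sample_cwnd():
    out = subprocess.run(["ss", "-ti"], capture_output=True, text=True).stdout
    return [int(m.group(1)) for m in CWND_RE.finditer(out)]

while True:
    cwnds = sample_cwnd()
    if cwnds:
        print(f"flows={len(cwnds)} min_cwnd={min(cwnds)} avg_cwnd={sum(cwnds)/len(cwnds):.1f}")
        # A sustained cwnd collapse across many flows could feed a decision to
        # shift traffic to another transit path (e.g. a BGP policy change).
    time.sleep(10)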


So, which one is best suited for the modern era?

Here's a list of TCP congestion control algorithms supported by the Linux kernel (summarized by ChatGPT); a short example of picking one per socket follows the list:

highspeed: Designed for networks with large bandwidth-delay products.

reno: The traditional TCP congestion control algorithm.

cubic: Default in Linux since kernel 2.6.19; optimized for high-speed networks.

bbr: Developed by Google (since kernel 4.9); focuses on bottleneck bandwidth and round-trip time.

bic: Binary Increase Congestion Control; predecessor to CUBIC.

htcp: Hamilton TCP; designed for high-speed, long-distance networks.

vegas: A delay-based algorithm that adjusts the sending rate based on RTT variations.

westwood: Optimized for lossy networks; adjusts congestion window based on bandwidth estimation.

yeah: Yet Another Highspeed TCP; combines delay and loss-based congestion detection.

hybla: Improves performance over high-latency networks by compensating for RTT.

illinois: Dynamically adjusts the congestion window for high-speed, long-distance links.

lp: Low Priority; designed for background traffic to yield to more important flows.

veno: Hybrid of Reno and Vegas; optimized for wireless networks.

scalable: Increases congestion window more rapidly than traditional TCP.

nv: New Vegas; an improvement over TCP Vegas.

cdg: CAIA Delay-Gradient; uses delay gradients to detect congestion.

dctcp: Data Center TCP; optimized for data center networks using ECN.
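
On Linux you can check which of these are available with "sysctl net.ipv4.tcp_available_congestion_control", and an application can even choose one per connection. Here is a minimal sketch using the Linux-only TCP_CONGESTION socket option, assuming the chosen algorithm (bbr here) is loaded on the system:

# Minimal sketch: select a congestion control algorithm per socket.
# "bbr" must appear in /proc/sys/net/ipv4/tcp_available_congestion_control.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))  # e.g. b'bbr...'
s.connect(("example.com", 80))   # this flow now uses BBR instead of the system default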

www.pingnetbox.com project

PingNetBox (https://www.pingnetbox.com) is developed using Python and PHP, and it runs on a Raspberry Pi, acting as a probe for network measurement.

The source code is available at: https://github.com/royleung01

It's easy for me to share this small-scale measurement tool, which comes with a web interface and reporting features. Of course, compared to RIPE Atlas, this is a much smaller project on a completely different scale.

The Raspberry Pi operates with low power consumption while delivering high performance. It fulfills basic network testing needs and can also function as an iPerf probe. Since it’s Linux-based, tools like Speedtest-CLI can also be used for measurements.
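
As a rough idea of the iPerf-probe use case, here is a minimal sketch. The server name is a placeholder, and the JSON field names are those produced by recent iperf3 versions for TCP tests:

# Minimal sketch: run iperf3 from the probe and pull the received throughput
# out of its JSON output. Requires iperf3 locally and a reachable iperf3 server.
import json, subprocess

def iperf_mbps(server, port=5201, parallel=1):
    out = subprocess.run(
        ["iperf3", "-c", server, "-p", str(port), "-P", str(parallel), "--json"],
        capture_output=True, text=True, check=True).stdout
    result = json.loads(out)
    return result["end"]["sum_received"]["bits_per_second"] / 1e6

print(f"{iperf_mbps('iperf.example.net'):.1f} Mbit/s received")  # placeholder server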

It seems promising! I’m thinking of developing PingNetBox version 2 with more embedded features for the Raspberry Pi.

Rasp-SDWAN, perhaps? IPSEC?

#raspberryPI #speedtest #RIPE #atlas #IP #network #measurement #Internet #github #IPSEC

Can AI Operate the Internet?

Skynet? No, no Terminator, please.

Just joking—don’t jump to the movie scene right away. Let’s consider the idea of AI operating a network, similar to how automation handles network device configuration and zero-touch provisioning. Is it possible?

Let’s start with automation. A device comes with an initial configuration that includes an IP address and gateway. Once it’s network-accessible, the central server recognizes the device and pushes the final configuration. That’s a simple flow, right?

But what happens next—who operates the device?
Today, it’s still a human network operator.

Could AI replace the network operator? It depends on how the AI model is trained, how it handles fault tolerance, and how it avoids operational errors. There's no such thing as a 100% guarantee.

However, using AI to operate parts of the network—such as traffic tuning, AS path changes, or metric modifications—seems realistic.

AI can assist with tasks like network measurements, route and traffic tuning, DDoS detection and auto-mitigation (which already exists), device fault detection and traffic redirection, and DNS updates. That all seems feasible. You could train an AI module specifically for this purpose.

But then again—do we need AI if existing technologies already meet these requirements?

Let’s consider a scenario and see what AI (specifically, ChatGPT) can do.


Scenario:

  1. An Internet provider has 2 transit links:
    • Link A: 1 Gbps
    • Link B: 10 Gbps
  2. The subnets 10.0.0.0/24 and 11.0.0.0/24 (AS 65000) are advertised to Link A.
  3. The same subnets, but with AS path prepending (65000, 65000), plus 12.0.0.0/24 (65000), are advertised to Link B.
  4. Link A is congested—11.0.0.0/24 is using nearly 900 Mbps.
  5. Link B’s total usage is just 5 Gbps.
  6. A customer on 10.0.0.0/24 complains they cannot access the Internet.

ChatGPT’s Suggested Operation:

Summary:

  • Remove or reduce AS path prepend on Link B.
  • Optionally add prepends on Link A.
  • Let traffic shift to Link B (10 Gbps).
  • This will relieve congestion on Link A.
  • It should also restore Internet access for users on 10.0.0.0/24.

Business Follow-Up:

“From a technical perspective, that works. But from a business point of view, Link A is now underutilized. If Link B has a higher cost per bps, this is inefficient.”

ChatGPT’s Response:

  • Technical QoS: Offload heavy prefixes like 11.0.0.0/24 to Link B.
  • Cost Control: Keep lighter prefixes like 10.0.0.0/24 on Link A to avoid overusing Link B.
  • Business Fit: Avoid paying a premium for Link B unless the capacity is needed.
  • Scalable Option: Automate prepend adjustments using monitoring-based thresholds.

Yes, that’s a great example of how AI can analyze your needs if you provide enough detail and criteria.
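
To make the "Scalable Option" concrete, here is a minimal sketch of threshold-based prepend decisions. The utilization figures mirror the scenario above, the 85%/60% thresholds are arbitrary assumptions, and actually pushing the policy to a router is deliberately left out (that part is vendor-specific):

# Minimal sketch: derive AS-path prepend counts from link utilization.
# Thresholds and link data are illustrative assumptions.
LINKS = {
    "A": {"capacity_mbps": 1000,  "usage_mbps": 950},
    "B": {"capacity_mbps": 10000, "usage_mbps": 5000},
}
HIGH, LOW = 0.85, 0.60  # push traffic away above 85%, attract it back below 60%

def prepend_plan(links):
    plan = {}
    for name, link in links.items():
        util = link["usage_mbps"] / link["capacity_mbps"]
        if util > HIGH:
            plan[name] = 2   # make this link less attractive for inbound traffic
        elif util < LOW:
            plan[name] = 0   # strip prepends so traffic can shift here
        else:
            plan[name] = 1   # keep a mild preference
    return plan

print(prepend_plan(LINKS))   # {'A': 2, 'B': 0}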

The remaining concern is AI’s operational cost and power consumption.

This topic definitely deserves more discussion—SD-WAN + AI?

#BGP #bgptrace #AI #chatGPT #AS #ASPrepend #path #link #transit #INTERNET #SDWAN

Test with ECMP on Linux – Part 2

Continuing from the previous article on ECMP on a Linux machine: the goal is to improve overall performance, but it will always be limited by the per-session (per-flow) nature of the traffic.

Surprisingly, something interesting happens during a multiple-flow Speedtest. When a single test generates multiple traffic flows using different port numbers (Layer 4), the aggregated result shows a higher combined throughput—regardless of whether the traffic is routed through two different public IPs with NAT.

For example, consider two links:

  1. 150Mbps Download / 30Mbps Upload (DIA)
  2. 30Mbps Download / 30Mbps Upload (DIA via public WiFi)

A Linux gateway is configured to use ECMP with two next-hop routes pointing to these links. When the traffic test starts, the portal or app displays only one of the two links' public IP addresses, yet the results show 170+ Mbps download and 40+ Mbps upload. WOOOO!!!

Of course, this is just a traffic test—similar to running iPerf with multiple flow tests and aggregating the results. So, yes, it’s possible!
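
Here is a minimal sketch of the same idea in Python: open several parallel TCP flows (each gets its own source port, so L4 hashing can spread them across the links) and add up the throughput. The test URL and the 10-second window are placeholders:

# Minimal sketch: aggregate download throughput over multiple parallel TCP flows.
# TEST_URL is a placeholder; point it at a large test object you control.
import threading, time, urllib.request

TEST_URL = "http://example.com/100MB.bin"   # hypothetical test file
FLOWS = 4
DURATION = 10  # seconds
byte_counts = [0] * FLOWS

def worker(i):
    deadline = time.time() + DURATION
    resp = urllib.request.urlopen(TEST_URL)
    while time.time() < deadline:
        chunk = resp.read(64 * 1024)
        if not chunk:                       # object finished: re-open it
            resp = urllib.request.urlopen(TEST_URL)
            continue
        byte_counts[i] += len(chunk)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(FLOWS)]
for t in threads: t.start()
for t in threads: t.join()

print(f"Aggregate over {FLOWS} flows: {sum(byte_counts) * 8 / DURATION / 1e6:.1f} Mbit/s")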

I believe some vendors use similar techniques for load sharing. Since it’s Linux-based, that shouldn’t be an issue.

Any other ideas? I’m also thinking about UDP-based video streaming—should we be considering application-layer optimization?

#ECMP #internet #NAT #IP #loadsharing #BGP #DIA #Speedtest #IPERF #measurement #traffictest

Test with ECMP on Linux

I was reading an article about ECMP (Equal Cost Multipath) for traffic load sharing, and it brought back memories of my previous traffic engineering tests. It seems simple at first glance, but it’s actually more complex—especially when it comes to policy-based routing.

The challenge lies in determining traffic redirection and sharing in a session-wise connection, whether with or without NAT, across multiple links or circuits with different latencies. There’s also the complication of firewall interception with asymmetric return traffic. These factors make achieving ideal traffic load sharing quite difficult.

Of course, if tunneling is involved, things get simpler: the tunnel essentially hides the underlying links from both endpoints and lets you add two equal-metric routes in the overlay routing. Even so, it doesn't really explain why load-sharing performance behaves the way it does.

What about service enhancement? If the primary link becomes congested, should the secondary link pick up some of the traffic? That’s not exactly round-robin behavior—it would require active measurement and monitoring of the links. Maintaining session flow on the primary link while redirecting new flows to the secondary link sounds ideal, but it’s difficult to implement. For MPLS-TE, that’s straightforward—but what if you have two internet links, like one DIA (Direct Internet Access) and one mobile network? How would you handle that?

Well, this is just for fun; I haven't done any serious measurements yet. But after setting up load sharing on my node, it seems to be working, though I haven't really thought through the next steps. Running a Speedtest shows that the flows (split by port) are transmitted separately. Hmm… not ideal, but not bad either. But what about other applications? If they're using two different IP addresses for outgoing traffic… ahhhh…

Let’s discuss this, bro.


Enable multipath load sharing over two next hops
sudo ip route add default scope global \
nexthop via 192.168.X.X dev XXX weight 1 \
nexthop via 192.168.X.X dev XXX weight 1

For multipath routing, it can help to stop connection tracking from picking up flows mid-stream (only track connections seen from the initial SYN):
sudo sysctl -w net.netfilter.nf_conntrack_tcp_loose=0

Enable Layer 4 Hashing
sudo sysctl -w net.ipv4.fib_multipath_hash_policy=1

Enable IP Forwarding
sudo sysctl -w net.ipv4.ip_forward=1

Force More Aggressive Flow-Based Balancing:
Set rp_filter to 0 (disable reverse path filtering) so the kernel won’t drop asymmetric traffic

sudo sysctl -w net.ipv4.conf.all.rp_filter=0

Flush the route cache
sudo ip route flush cache
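
To see the hashing in action, you can ask the kernel which next hop it would pick for a specific 5-tuple. A minimal sketch, assuming a reasonably recent iproute2 that accepts the ipproto/sport/dport selectors on "ip route get":

# Minimal sketch: show which next hop the L4 hash selects for different flows.
import subprocess

def route_for_flow(dst, sport, dport):
    cmd = ["ip", "route", "get", dst,
           "ipproto", "tcp", "sport", str(sport), "dport", str(dport)]
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

# With fib_multipath_hash_policy=1, different source ports should hash to
# different next hops (and therefore different links).
for sport in (10001, 10002, 10003, 10004):
    print(sport, "->", route_for_flow("8.8.8.8", sport, 443))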

#ECMP #Linux #Internet #Routing #IP #Firewall #Tunneling #MPLS #trafficEngineering #ChatGPT

Looking glass function provided by RIPE Atlas?

I performed some traceroute tests using the public looking glass of another organization/provider. I found that some test functions, like Ping and Traceroute, were launched using RIPE Atlas probes. It looks impressive and kind of funny.

In the previous year, the provider developed a web interface and API to launch commands from their own PE (Provider Edge) or Internet BG (Border Gateway) routers and return the results. The geographical router list allows users to select region-based tests.

This seems to be a new method using RIPE Atlas, where queries can be made via an API. The web interface lets users select which probe to use for the measurement, deducting the web provider’s “RIPE Atlas Credits” for each test.
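
For what it's worth, launching such a one-off test yourself is a small API call. Here is a minimal sketch against the RIPE Atlas v2 measurement API; the API key, probe ID, and target are placeholders, each request spends credits, and the exact field layout should be checked against the current API docs:

# Minimal sketch: create a one-off ping measurement from a chosen RIPE Atlas probe.
import json, urllib.request

API_KEY = "YOUR_ATLAS_KEY"   # placeholder
body = {
    "definitions": [{"type": "ping", "af": 4, "target": "www.pingnetbox.com",
                     "description": "looking-glass style one-off ping"}],
    "probes": [{"type": "probes", "value": "12345", "requested": 1}],  # probe ID placeholder
    "is_oneoff": True,
}
req = urllib.request.Request(
    "https://atlas.ripe.net/api/v2/measurements/?key=" + API_KEY,
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"})
print(urllib.request.urlopen(req).read().decode())  # returns the new measurement ID(s)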

However, I’m wondering — since looking glass aims to provide insights into a specific network provider’s or AS owner’s network — if we’re using this method, why not just go to the official RIPE Atlas website to launch the test?

Well, I guess the more user-friendly web portal makes it easier for users.

Pingnetbox – http://www.pingnetbox.com

#ripe #atlas #lookingglass #measurement #ping #traceroute #test #internet #AS #chatgpt #proofreading

Starlink Satellites’ Movement Proven by Periodical Measurement – Part 2

After tuning the measurement interval to 5 minutes, the result portal summarizes the data into a single file per RIPE Atlas probe ID.
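
As a rough illustration of that summarization step, here is a minimal sketch that pulls ping results for a public Atlas measurement and reports the median RTT per probe ID. The measurement ID is a placeholder, and the field names follow the Atlas ping result format:

# Minimal sketch: summarize RIPE Atlas ping results per probe ID.
import json, statistics, urllib.request
from collections import defaultdict

MSM_ID = 12345678   # placeholder measurement ID
url = f"https://atlas.ripe.net/api/v2/measurements/{MSM_ID}/results/?format=json"
results = json.loads(urllib.request.urlopen(url).read())

rtts = defaultdict(list)
for r in results:
    avg = r.get("avg")
    if isinstance(avg, (int, float)) and avg > 0:   # skip failed samples
        rtts[r["prb_id"]].append(avg)

for prb, samples in sorted(rtts.items()):
    print(f"probe {prb}: median {statistics.median(samples):.1f} ms over {len(samples)} samples")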

The results show a predictable pattern of latency changes with increases and decreases, which may indicate satellite movement. We assume that the latency between the ground station, CDN server, and client site remains constant (unless under a DDoS attack… um…).

With the current resources available on RIPE Atlas, can we compare country-based latency and service levels of Starlink? Ah, that should probably be done by the Starlink NOC…

https://www.bgptrace.com/atlas/starlink

#starlink #CDN #cloudflare #satellites #ping #latency #movement #probe #RIPE #atlas

Starlink Satellites’ Movement Proven by Periodical Measurement

Using RIPE Atlas (what a great network measurement platform!), we selected a probe that uses Starlink as its uplink and continuously measures connections to CDN servers.

We assume that, no matter which Starlink satellites are passing over the area, the network service connection will still be provided to the same region. For example, if the satellites are crossing the US regions, it doesn’t matter which satellite; it will send data back to a US-based station on the ground.

The test seems a bit funny, but the latency trend appears to follow a pattern: it moves from high latency to low latency and then back to high latency over time. Assuming the ground-station link to the destination CDN server is a fixed connection, its latency remains constant, so the movement of the satellites is what changes the measured latency. When the latency drops, it suggests that another satellite has taken over that area and the handover (roaming) is complete.

You can think of this like using a mobile device. As you move from one cell site (A) to another (B), a handover occurs, which re-registers your device from Cell Site A to Cell Site B. Satellites go through a similar process, except that here the "cells" move while the user stays still.

Now, back to the Starlink client probe: if its location doesn't change, then as a satellite moves through space, the distance between it and the probe site increases, and the latency reflects this. When the latency decreases again, we may assume that another satellite has taken over the service coverage (similar to the roaming process), since the satellites do not move backward.

Moreover, does the change in latency over time affect the user experience?
For instance, during a video or voice call, latency may fluctuate—increasing or decreasing.

However, live gameplay presents a different scenario. Unlike calls, it often relies on a stable connection. A fixed connection typically doesn’t exhibit the same fluctuating physical characteristics, making latency more predictable in gaming environments.

Currently, measurements are taken every 15 minutes. If we shorten this test period, we may get more accurate insights into this operation.

https://www.bgptrace.com/atlas/starlink

#starlink #satellites #probe #RIPE #atlas #internet #measurement #roaming #cellsite #cell #mobile #ping #latency

How do you troubleshoot a network problem? Cabling? Configuration?

In a Wide Area Network (WAN), the circuit provided by the telecom backhaul between two endpoints, whether point-to-point between two sites (EVPL, SDH) or customer site to provider PE (Internet, IPVPN, VPLS, etc.), has to be connected to the provider's equipment or routers to deliver the service. If you're referring to dark fiber in a limited area… um… okay, next.

How do you verify the circuit service? Check your site router configuration? Check your IP routing?

The basic mindset: I believe we should start by checking the cabling. Yes, Layer 1, isn’t it?

If your port is UP and able to send and receive packets, WELL, at least confirm both endpoint IP addresses and perform a ping test. (Yes, a PING test—please don’t tell me you don’t know what PING is.)


From past experience, field engineers often argue that the device configuration is incorrect, but guess what? The issue ends up being the WRONG port connected.

That's why photo capture is SO IMPORTANT!!!

What if the ping fails? Yes, it happens—cable quality issues, loose connectors, poor signaling, etc.

Have you ever checked the DUPLEX setting????????????? Confirm both ends have the SAME duplex setting!!
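
A quick way to compare both ends without screenshots is to read speed and duplex from ethtool. A minimal sketch; the interface name is a placeholder, and ethtool must be installed (it usually needs root):

# Minimal sketch: read the Speed/Duplex lines from "ethtool <iface>" output.
import re, subprocess

def speed_duplex(iface):
    out = subprocess.run(["ethtool", iface], capture_output=True, text=True).stdout
    speed = re.search(r"Speed:\s*(\S+)", out)
    duplex = re.search(r"Duplex:\s*(\S+)", out)
    return (speed.group(1) if speed else "unknown",
            duplex.group(1) if duplex else "unknown")

print(speed_duplex("eth0"))   # e.g. ('1000Mb/s', 'Full') -- compare with the far end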


Then you’ll mention bandwidth: “Speedtest.com, huh? Why can’t I get full bandwidth?!”

Please understand: we cannot guarantee a test server will allocate all resources for your test. The Internet is unmanaged, and you need to be aware of overhead and your device’s processing power. Do you really think your mobile can hit 2Gbps over Wi-Fi, bro?

For standard testing, running tests between the client site and the ISP backhaul provides a great reference for your service quality—this is typically done during installation.

But anyway… PLEASE confirm the cabling is correct before spending too much time checking the configuration. Start with Layer 1 first!

#circuit #physical #cabling #ISP #provider #Internet #EVPL #IPLC #IPVPN #P2P #IP #testing #ping #bandwidth #speedtest #traffic #packetlost #duplexing #router #WIFI

Model Training on AMD 16-core CPU with 8GB RAM running in a virtual machine for Bitcoin Price Prediction – Part 2 – Updated

Continuing from "Over 500,000+ Data Points for Bitcoin (BTC) Price Prediction"

Using the Python program, the first method I tried was SVR (Support Vector Regression) for prediction. However… how many steps should I use for prediction? 🤔

Previously, I used a Raspberry Pi 4B (4GB RAM) for prediction, and… OH… 😩
I don’t even want to count the time again. Just imagine training a new model on a Raspberry Pi!

So, I switched to an AMD 16-core CPU with 8GB RAM running in a virtual machine to perform the prediction.

  • 60-step calculation: took 7 hours 😵
  • 120-step calculation: …man… still running after 20 hours! 😫 It finally finished after 33 hours!

Do I need an M4 machine for this? 💻⚡

ChatGPT provided another approach.
OK, let’s test it… I’ll let you know how it goes! 🚀

🧪 Quick Example of the Effect of More Time Steps

Time step (X length) vs. predicted accuracy:
  • 30 steps: ⭐⭐⭐. Quick, but less accurate for long-term trends.
  • 60 steps: ⭐⭐⭐⭐. Balanced context and performance.
  • 120 steps: ⭐⭐⭐⭐½. Better for long-term trends, but slower.
  • 240 steps: ⭐⭐. Risk of overfitting and slower training.
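
For context, the "time steps" above are just the window length used to build the training samples. Here is a minimal sketch of that windowing with scikit-learn's SVR; the CSV file, column name, and 60-step window are illustrative assumptions, not the original training setup:

# Minimal sketch: window a price series into fixed-length inputs and fit SVR.
import numpy as np
import pandas as pd
from sklearn.svm import SVR

STEPS = 60
prices = pd.read_csv("btc_prices.csv")["close"].to_numpy()   # hypothetical file/column

X = np.array([prices[i:i + STEPS] for i in range(len(prices) - STEPS)])
y = prices[STEPS:]                      # the price right after each window

model = SVR(kernel="rbf", C=100, gamma="scale")
model.fit(X, y)                         # this is the slow part on a small CPU
print(model.predict(X[-1:]))            # one-step-ahead prediction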

#SVR #Prediction #Computing #AI #Step #ChatGPT #Python #Bitcoin #crypto #Cryptocurrency #trading #price #virtualmachine #vm #raspberrypi #ram #CPU #CUDA #AMD #Nvidia