Title:
EU Gateways – Persistent Rubberbanding + Packet Loss Near Origin (WinMTR attached)

Description:

I am experiencing persistent rubberbanding on all EU gateways in Path of Exile 2. The issue has been reproduced under multiple controlled conditions and across two independent internet connections.

Symptoms:

Severe rubberbanding (server correction resets)

Visible in-game latency spikes

Occurs in all instances (maps, hideout, town)

Not limited to peak hours

Environment:

ISP: Deutsche Telekom (Germany)

Two separate households (directly neighboring properties)

Two different routers (FritzBox and Speedport Pro)

LAN and WLAN both tested (identical behavior)

IPv6 disabled (no change)

Bufferbloat test result: Grade A

VPN Testing:

Mullvad VPN tested (Frankfurt and Amsterdam exits)

No improvement

All EU gateways tested: Frankfurt, Amsterdam, Milan, London

No gateway-specific improvement

Other online games:

No packet loss

No rubberbanding

Stable latency

Network Diagnostics:

Active session endpoint:
173.233.129.236:21360 (PathOfExileSteam.exe)
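
For reference, the endpoint above was identified by matching established connections against the running game process. A minimal PowerShell sketch (assuming the session runs over TCP; the process name comes from the endpoint line above, and Resource Monitor shows the same information graphically):

$gamePid = (Get-Process PathOfExileSteam).Id
Get-NetTCPConnection -OwningProcess $gamePid -State Established | Select-Object RemoteAddress, RemotePort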

Traceroute:
Telekom → GTT (Munich) → Cloudflare → backend → origin
Baseline RTT: ~33–40 ms
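
For anyone who wants to reproduce the path check, a plain Windows traceroute against the session endpoint looks like:

tracert -d 173.233.129.236

(-d skips reverse DNS lookups so the trace completes faster.)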

WinMTR during active lag:

0% packet loss up to Cloudflare edge

Packet loss begins at 188.42.187.111

3% packet loss continues to final hop (173.233.129.236)

Average RTT increases from ~33 ms to ~58–59 ms at destination

Worst spikes exceed 1500 ms

This indicates packet loss occurring behind the Cloudflare edge, near the origin or hosting segment.
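
For reproduction without WinMTR, a rough built-in equivalent is pathping; a sketch of the command I would run during a lag window (query count and interval are just example values):

pathping -n -q 100 -p 250 173.233.129.236

-n skips name resolution, -q sets the number of probes per hop, -p sets the delay between probes in milliseconds.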

Because:

Two independent connections reproduce the issue

VPN ingress variation does not change behavior

Loss begins near final hops and propagates to destination

Only PoE2 traffic is affected

This appears to be one of the following:

Origin/backend packet loss

Cloudflare → origin interconnect congestion

EU cluster network instability

If needed:

WinMTR logs captured during active rubberbanding

Please investigate packet loss and backend network behavior on EU session nodes corresponding to 173.233.129.236 (or equivalent).

I can provide additional logs or packet captures if required.
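
If a packet capture is wanted, my plan would be the built-in netsh trace, filtered to the session endpoint and run from an elevated PowerShell prompt while the rubberbanding is active (file path and size limit below are just example values):

netsh trace start capture=yes IPv4.Address=173.233.129.236 tracefile=C:\temp\poe2_lag.etl maxsize=512
# reproduce the rubberbanding, then:
netsh trace stop

The resulting .etl file can be converted for Wireshark with Microsoft's etl2pcapng tool if that format is preferred.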
Also, I just died in a crucial map due to a latency spike. I have been experiencing these issues since launch, and the persistent rubberbanding significantly affects gameplay and progression. If this problem cannot be resolved or acknowledged, I may unfortunately have to stop playing PoE2, despite wanting to continue supporting and playing the game.
Hello,

The Paris server is also affected by this issue. I encountered huge latency spikes on it last evening.

Smooth gameplay needs rock-solid servers. I dream of it :)
You can try setting your DNS to 1.1.1.1, 4.4.4.4 or 8.8.8.8 - this is about routing: your ISP sends packets to other ISPs and has no control once the next ISP takes over. You can also check your MTU configuration. Usually it should be 1450, not 1500 - smaller packets have a better chance of reaching the destination.
Thank you for the suggestions regarding DNS and MTU. I have tested both points to rule them out properly.

1) DNS (1.1.1.1 / 8.8.8.8)
Changing DNS does not affect an already established game session, as DNS is only used for hostname resolution. Once the client connects to a specific session endpoint (in my case 173.233.129.236:21360), traffic flows directly to that IP.

Since the packet loss appears during an active session and is measurable via WinMTR to the resolved endpoint, DNS selection should not influence the observed behavior. For completeness, I can still test alternate DNS providers, but from a technical standpoint this is unlikely to resolve transport-layer packet loss.
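
(For anyone who wants to verify this locally: while in game, a simple netstat filter should keep showing the same endpoint IP no matter which DNS resolver is configured - assuming, as above, that the session runs over TCP:)

netstat -ano | findstr 173.233.129.236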

2) MTU Testing

I performed proper MTU discovery using:

ping 173.233.129.236 -f -l 1472

1472 bytes failed due to fragmentation (as expected with PPPoE).
1464 bytes succeeded without fragmentation.

This results in:

1464 + 28 bytes (IP/ICMP header) = 1492 MTU

An MTU of 1492 is completely normal for Telekom DSL (PPPoE).
There is no unusually low MTU (e.g. 1400 or below) that would indicate a path MTU issue.

Therefore, MTU configuration appears correct and not abnormal.
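
For completeness, the effective interface MTU can be read, and pinned if desired, with the built-in netsh commands below (the adapter name "Ethernet" is just an example and has to match the local adapter):

netsh interface ipv4 show subinterfaces
netsh interface ipv4 set subinterface "Ethernet" mtu=1492 store=persistent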

3) Remaining Observation

WinMTR captured during active rubberbanding shows:

0% packet loss up to Cloudflare edge

Packet loss begins at 188.42.187.111

3% packet loss continues to the final destination (173.233.129.236)

Worst-case latency spikes >1500 ms

Because the loss begins behind the Cloudflare edge and propagates to the final hop, this does not appear to be a local MTU or DNS issue.

Given that:

Two independent Telekom connections reproduce the issue

VPN ingress variation does not change behavior

Other games are unaffected

MTU is normal (1492)

The most plausible remaining cause is packet loss or congestion near the origin/hosting segment rather than a local configuration issue.

I appreciate the input and am open to further technical suggestions, but based on the data collected so far, DNS and MTU do not appear to be the root cause.
Additional technical findings after further structured testing:
I was able to precisely characterize the behavior during lag spikes, and the pattern strongly suggests server-side simulation stalls rather than a pure routing or client-side issue.
Observed behavior during spikes:
• FPS remains completely stable.
• In-game latency graph spikes abruptly to very high values.
• The game world freezes (no entity movement, no combat resolution).
• After several seconds, the game resumes and rapidly “catches up”, replaying actions in accelerated succession.
This pattern is fully consistent and reproducible.
Key characteristics:
1. The spike is sudden, not gradual.
2. There is no gradual packet degradation beforehand.
3. Client performance remains unaffected.
4. The catch-up phase happens instantly once the spike ends.
5. In Endgame, smaller spikes occur periodically (approximately every 30–50 seconds), with occasional longer stalls.
Interpretation:
This behavior aligns with a server-side tick stall or backend blocking event:
• The authoritative simulation appears to pause.
• The client waits for server state updates.
• Once the server resumes processing, buffered state updates are transmitted.
• The client reconciles and fast-forwards simulation.
This does not resemble typical transport-layer packet loss, which would usually manifest as jitter, retransmission variance, or irregular packet delay rather than consistent freeze → catch-up behavior.
Campaign vs Endgame pattern:
During campaign progression:
• Lag behavior appeared more instance-dependent.
• Creating a new instance occasionally improved behavior.
During Endgame:
• Spikes are more persistent.
• Smaller spikes appear periodically (approx. every 30–50 seconds).
• Larger stalls occasionally occur where the simulation completely halts.
The periodic nature in Endgame is particularly notable. Network congestion typically does not manifest in regular intervals. However, periodic backend tasks (e.g., state serialization, database persistence, cluster synchronization, or garbage collection) can produce such timing patterns.
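To test the periodicity hypothesis, I intend to log timestamped ICMP round trips to the session endpoint and line them up against the in-game freeze windows. A minimal PowerShell sketch (interval and log file are arbitrary choices):

# Windows PowerShell 5.1 exposes ResponseTime; on PowerShell 7 use .Latency instead
while ($true) {
    $r = Test-Connection 173.233.129.236 -Count 1 -ErrorAction SilentlyContinue
    $rtt = if ($r) { $r.ResponseTime } else { "timeout" }
    $line = "{0:HH:mm:ss.fff}  {1}" -f (Get-Date), $rtt
    $line                              # echo to console
    Add-Content rtt_log.txt $line      # and append to the log
    Start-Sleep -Milliseconds 500
}

If the in-game latency graph spikes while the ICMP log stays flat, that would further point at a server-side stall rather than a transport-layer problem.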
Additional context from prior diagnostics:
• Two independent Telekom connections (neighboring properties) reproduce the issue.
• LAN and WLAN both tested.
• VPN (Frankfurt and Amsterdam exits) does not alter behavior.
• MTU verified at 1492 (normal for PPPoE).
• WinMTR shows packet loss beginning near the origin/backend segment, not upstream in Telekom or Cloudflare edge.
Taken together, the evidence suggests that the issue is more likely located in:
• EU cluster node performance under certain load conditions,
• backend I/O or persistence interactions,
• or Cloudflare → origin interconnect / origin processing layer.
Given the confirmed freeze → catch-up behavior with stable FPS and abrupt latency spikes, this appears consistent with server-side simulation stalls rather than client configuration or local routing.
I am willing to provide additional logs, time-stamped reproduction windows, or packet captures if that would assist further investigation.
Thank you for taking a closer look at this.
