Why not "amortise" the period of sending keystrokes -- buffer them in a queue, and process the queue for sending these on a regular (and short enough for the human at the client end feeling the interactivity) interval, so there's no latency difference between sending an 'a' vs a 'q' and so on. If we assume some average typing speed on the bell curve, say, around 250 keystrokes per minute, the queue can be picked for sending every 250 milliseconds or so. That solution wouldn't require injecting extra packets on the network. What am I missing?