Let me clarify: We have a certain amount of latency when streaming games from both local and internet servers. In either case, how do we improve that latency, and what limits will we run into as the technology progresses?

  • Theoretically, the latency between the streamer and viewers could be zero or near zero.

    For playing games online, the minimum possible latency is the speed of light delay. We’re pretty much already at the limit for that one, and we’re even using a lot of pretty clever techniques to mitigate latency such as lag compensation.
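    A rough sketch of the lag compensation idea mentioned above (names and numbers here are illustrative, not any engine’s actual API): the server keeps a short history of where everyone was, and rewinds by roughly half the shooter’s round-trip time before checking a hit.

```python
# Minimal lag-compensation sketch (hypothetical API): the server records
# snapshots of player positions and rewinds to where targets were when
# the shooter actually fired, rather than where they are now.
from dataclasses import dataclass

@dataclass
class Snapshot:
    time_ms: int
    positions: dict  # player_id -> (x, y)

class LagCompensator:
    def __init__(self, history_ms=1000):
        self.history = []          # snapshots, oldest first
        self.history_ms = history_ms

    def record(self, snap: Snapshot):
        self.history.append(snap)
        cutoff = snap.time_ms - self.history_ms
        self.history = [s for s in self.history if s.time_ms >= cutoff]

    def rewind(self, fire_time_ms: int) -> Snapshot:
        # Pick the snapshot closest to when the client actually fired
        # (server receive time minus half the client's round-trip time).
        return min(self.history, key=lambda s: abs(s.time_ms - fire_time_ms))

comp = LagCompensator()
comp.record(Snapshot(100, {"target": (0, 0)}))
comp.record(Snapshot(150, {"target": (5, 0)}))
comp.record(Snapshot(200, {"target": (10, 0)}))

# Shot arrives at server time 200; shooter's RTT is 100 ms, so they saw
# the world as of ~150 ms. The hit check uses (5, 0), not (10, 0).
rewound = comp.rewind(200 - 100 // 2)
print(rewound.positions["target"])  # -> (5, 0)
```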

    • Ooh, we’re not at the speed of light as a limit yet, are we? Do you mean “point A to point B” on fibre, or do you actually mean full on “routed-over-the-internet”? Even with fibre (which is slower than the speed of light), you’re never going in a straight line. And, at least where I live, you’re often back-tracking across the continent before your traffic makes it to the end destination, with ISPs caring more about saving money than routing traffic quickly.

      •  jamiehs   ( @jamiehs@lemmy.ml ) 

        For most of us, there is no difference though; you get what you get.

        I live in a nice neighborhood but I won’t ever get fiber… we have underground utilities and this area is served by coaxial cable. There’s no way in hell they are digging up miles of streets to lay fiber; you get what you get.

        My ISP latency is like 16-20ms but when sim racing it just depends on where the race server is (and where my competitors are). As someone on the US west coast, if I’m matched with folks in EU and some others in AUS/NZ, the server will likely be in EU and my ping will be > 200. My Aussie competitors will be dealing with 300-400.

        It’s not impossible to share a track at those latencies, but for close racing or a competitive shooter… errrr that just doesn’t work.

        The fact that I’m always at around 200ms for EU servers might be improved if we could run a single strand of fiber from my house to the EU server (37ms!) but there would still be switching delays, etc. so yeah the speed of light is the limit, but to your point, there’s a lot of other stuff that adds overhead.
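        For a sense of scale, a back-of-envelope propagation-delay calculation (the 9000 km distance and the refractive index are assumptions for illustration): even a perfectly straight fiber run from the US west coast to central Europe gives a round trip of roughly 90 ms before any switching overhead.

```python
# Back-of-envelope propagation delay, assuming signals in fiber travel at
# roughly two-thirds of the vacuum speed of light (refractive index ~1.5).
C_VACUUM_KM_S = 299_792               # km/s, speed of light in a vacuum
C_FIBER_KM_S = C_VACUUM_KM_S / 1.5    # effective speed in glass fiber

def one_way_ms(distance_km: float) -> float:
    """One-way propagation delay over a straight fiber run, in ms."""
    return distance_km / C_FIBER_KM_S * 1000

# Assumed rough great-circle distance, US west coast to central Europe.
d = 9000
print(round(one_way_ms(d), 1))       # -> 45.0 (one-way)
print(round(2 * one_way_ms(d), 1))   # -> 90.1 (best-case round trip)
```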

        • Theoretically it doesn’t really matter whether your connection is fiber or copper. Electricity moves through copper roughly at the same speed as light moves through fiber. The advantages that fiber has over copper is that it can be run longer distances without needing boosting, and that you can run an absolute fuckton more end-to-end connections in the same diameter of cable. More connections means less contention - at least at one end of the pipe. The problem then moves to the ISP’s routers :)

          I’d say that the chances are actually quite good that you’ll get fiber internet within the next 10 years. Whether or not it improves your internet connection is another question entirely!

          • It needs less boosting; fiber still needs repeaters over sufficiently long spans.

            Really the biggest advantage to fiber from a consumer perspective is that it’s not subject to signal deformation and interference. You don’t have nearly as many issues with fiber Internet as a result.

            • Sorry, what I wrote here was unclear, I wrote it needs less boosting in another comment, but re-reading this one, it does sound like I’m claiming it needs no boosting over any distance - that’s not what I meant though! I just meant that you can run an equivalent link without any boosting further than you could with copper.

              Interference isn’t actually that big of a deal for Ethernet over copper, unless the installer does something silly like run UTP alongside high power electrical lines, or next to a diesel generator, or something. Between shielding, the use of balanced signals, and the design of twisted pair, most interference is eliminated.

              • Interference isn’t actually that big of a deal for Ethernet over copper, unless the installer does something silly like run UTP alongside high power electrical lines, or next to a diesel generator, or something. Between shielding, the use of balanced signals, and the design of twisted pair, most interference is eliminated.

                This should be true, but in practice … there are a lot more environmental factors that can and do impact copper cables (which can result in some really wacky situations to diagnose: “this only happens on hot days when XYZ part of the line to your house expands”), and more installation errors (e.g., not grounding the wire). That doesn’t matter much for TCP applications/protocols, but for UDP applications/protocols it can all add up to something that’s observable in the real world.

                You get a lot closer to “all” interference being removed with fiber … and for most gamers at least, that’s probably the most noticeable improvement on fiber vs “cable” service (other than perhaps a download/upload speed bump). Pings are in my experience roughly the same, though the fiber networks tend to fair a bit better (probably just from newer hardware backing the network).

                It’s becoming more of an issue too (in Ohio at least) because more and more ISPs are locking folks out of their modem’s diagnostics, so they can’t actually see that the modem is detecting signal quality issues coming into the house… I almost always recommend folks just go with fiber all the way into their house if they have the option, unless they just use the web and watch videos (in which case who cares, TCP will make it so you don’t care unless it’s really bad, and the really bad cases are typically fixed the first time the tech is out).

                It’s one of those things where there’s not much of a benefit for consumers on paper (theoretically – as you say – you could have copper service that’s just as good and fast as fiber) … but in practice, fiber just saves a lot of headaches for all parties because of its resistance to interference and simpler installation.

      • Even with fibre (which is slower than the speed of light)

        This makes no sense. Are you referring to the speed of light in a vacuum? Fiber transmits data using photons, which travel at the speed of light in the medium. While, yes, there is often some slowing of signals depending on whether the fiber is single-mode or multi-mode and whether the fiber has intentionally been doped, it’s close enough to the theoretical maximum that it’s not really worth splitting hairs (heh) over.

        There are additionally some delays added during signal processing (modulation and demodulation from the carrier to layer 3) but again this is so fast at this point it’s not really conceivably going to get much faster.

        The bottleneck really is contention vs. throughput, rather than the media or the modulation/demodulation and encoding/decoding.

        At least to the best of my knowledge!

        you’re often back-tracking across the continent before your traffic makes it to the end destination, with ISPs caring more about saving money than routing traffic quickly

        That’s generally not how routing works - your packets might take different routes depending on different conditions. Just like how you might take a different road home if you know that there’s roadworks or if the schools are on holiday, it can be genuinely much faster for your packets to take a diversion to avoid, say, a router that’s having a bad day.

        Routing protocols are very advanced and capable, taking many metrics into consideration for how traffic is routed. Under ideal conditions, yes, they’d take the physically shortest route possible, but in most cases, because electricity moves so fast, it’s better to take a route that’s hundreds of miles longer to avoid some router that got hacked and is currently participating in some DDoS attack.
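        A sketch of that metric idea (the topology and costs below are made up for illustration, not any real protocol’s values): a shortest-path computation where the physically shorter route loses because one of its routers has been given a high cost.

```python
# Metric-based routing sketch: shortest path by cost, where a congested or
# compromised router is assigned a high cost so traffic detours around it.
import heapq

def shortest_path(graph, src, dst):
    # graph: node -> {neighbor: cost}
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Physically shortest route is A -> B -> D, but B's link to D is overloaded
# (cost 100), so the longer A -> C -> D path wins on metric.
net = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "D": 100},
    "C": {"A": 5, "D": 5},
    "D": {"B": 100, "C": 5},
}
print(shortest_path(net, "A", "D"))  # -> ['A', 'C', 'D']
```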

        • That’s generally not how routing works

          It is how it works … mostly because what they’re talking about is the fact that the Internet (at least in the US) is not really set up like a mesh at the ISP level. It’s somewhere between “mesh” and “hub and spoke”, where lots of parties that could talk directly to each other don’t (because nobody ever put down the lines and set up the routing equipment to connect two smaller ISPs or customers directly).

          https://www.smithsonianmag.com/smart-news/first-detailed-public-map-us-internet-infrastructure-180956701/

          • There’s absolutely nothing wrong with that topology - the fact that you seem to think that the design is a bad thing really demonstrates your lack of understanding here.

            For example, have you never wondered why we don’t just connect every device in a network all together like a big daisy chain? Or why we don’t use a mesh network? There is a large number of reasons why we don’t really use those topologies anymore.

            I don’t want to get into the specifics, but in general, the more networks a router is connected to, the less efficient it is overall.

            The propagation delay is pretty insignificant for most routers. Carrier grade routers like those at the core of the internet can handle up to 43 billion packets per second, another hop is absolutely nothing in terms of delay.

            • For example, have you never wondered why we don’t just connect every device in a network all together like a big daisy chain? Or why we don’t use a mesh network? There is a large number of reasons why we don’t really use those topologies anymore.

              Well daisy chaining would be outright insanity … I’m not even sure why you’d jump to something that insane … my internet connection doesn’t need to depend on the guy down the street.

              Making an optimally dense mesh network (and to be clear, I mean a partially connected mesh topology with more density than the current situation … which at a high level is already a partially connected mesh topology) would not be optimally cost effective … that’s it.

              the more networks a router is connected to, the less efficient it is overall. another hop is absolutely nothing in terms of delay.

              Do you not see how these are contradictory statements?

              Yeah, you’d need more routers, you have more lines. But you could route more directly between various points. e.g., there could be at least one major transmission line between each state and its adjacent states to minimize the distance a packet has to physically travel and increase redundancy. It’s just more expensive and there’s typically not a need.

              This stuff happens in more population-dense areas because there’s more data and more people, so direct connections make more sense. It’s just money; it’s not that having fewer lines through the great plains somehow makes the internet faster… Your argument and your attitude is something else. I suspect we’re just talking past each other, but w/e.

                • I wear a lot of hats professionally; mostly programming. I don’t do networking on a day-to-day basis though if that’s what you’re asking.

                  If you’ve got something actually substantive to back up your claim that (if money was no object) the current topology is totally optimal for traffic from an arbitrary point A <-> B on that map though… have at it.

                  This all started with:

                  you’re often back-tracking across the continent before your traffic makes it to the end destination, with ISPs caring more about saving money than routing traffic quickly

                  And that’s absolutely true … depending on your location, you will travel an unnecessary distance to get to your destination … because there just aren’t wires connecting A <-> B. Just like a GPS will take you on a non-direct path to your destination because there’s not a road directly to it.

                  A very simple example where the current topology results in routing all the way out to Seattle only to backtrack: https://geotraceroute.com/?node=0&host=umt.edu#

  • I played on Google Stadia from day 1 until it got shut down. I mainly played racing games like F1 and GRID, with the occasional session in RDR2 or The Division 2. Latency was never a problem for me.

    The main problem that occurred over and over in the community was people’s slow or broken internet connections at home, or their WiFi setup.

    I would say the technology for cloud gaming is here today, but the home internet connections of a lot of people aren’t ready yet.

      • WiFi is, and probably always will be, a fraction of the performance of an ethernet connection

        In terms of bandwidth, sure, but not in terms of latency. In fact, theoretically, WiFi could be faster than Ethernet: radio waves travel through air at very nearly the speed of light in a vacuum, faster than signals propagate through copper or glass.

        The limitation for WiFi is really at the physical layer, i.e. encoding/decoding. With that said, we do already have WiFi with transcoding fast enough to give sufficient performance for fast-paced gaming. While you’re totally correct that, at the moment, Ethernet is more capable in terms of bandwidth and latency, that’s not necessarily going to be true forever, and WiFi is good enough for typical home use. The biggest issues are interference and attenuation, e.g. thick walls and sources of electromagnetic interference.

        • Sure, good points. Even with in-home fiber (very unusual), latency of the medium is so equivalent as to be practically unmeasurable. I think, however, that the bigger factor is that it’s cheaper and easier to get a fast ethernet switch than a fast WiFi router; most WiFi routers don’t have particularly fast CPUs or high-performance buses.

          Honestly, though, I’m just guessing; I doubt any of this has as much of a latency impact as WAN factors. Bandwidth is where you’ll notice WiFi’s effects, and this can present as latency issues as systems struggle to get updates over a (relatively) narrow pipe.

          • Thanks for the response, it’s nice to chat with you :)

            latency of the medium is so equivalent as to be practically unmeasureable

            More or less, yup. There are some cool uses of RF to achieve very high bandwidth, low latency connections; 5G is a common example, and Wi-Fi 7 has a theoretical maximum speed of 46Gbps. That’s still far behind the maximum speed of Ethernet (we have 400Gbps Ethernet in use, with 800Gbps in development), but it’s catching up very fast. And since most households and businesses with copper cabling will be using mostly CAT5e or CAT6a Ethernet (1Gbps and 10Gbps over 100m, respectively), Wi-Fi will soon likely be faster than most copper Ethernet networks. It’s also very likely that 5G internet will all but supplant ADSL and VDSL connections in the coming years. I think twisted-pair copper cabling is following in the footsteps of coax :)

            Even with in-home fiber

            The minimum latency of a connection through fiber is about the same as (actually, slightly less than, but not enough to matter) the same connection made through copper. Signal propagation speed is not a benefit of fiber over copper; the benefits of fiber are that you can have many, many more connections in the same diameter of cable than with copper, it’s immune to electromagnetic interference, and it can run much further distances without needing signal boosting.

            most WiFi routers don’t have particularly fast CPUs, or high-performance buses.

            That’s one of the main issues, yeah - consumer grade electronics are usually total junk, especially the free routers provided by ISPs, but I’m also thinking of those absolutely horrible “gaming” Wi-Fi routers provided by the likes of ASUS - they have decent specs, but they’re just absolutely overloaded with features that gobble RAM and CPU. Dear consumer electronics manufacturers, please just let the router be a router, and let the Wi-Fi APs be Wi-Fi APs. Combine the router and the Wi-Fi AP if you must, but absolutely please stop suggesting that people can run a hundred services from routers. You should totally upsell that feature in a separate node appliance or something! Sorry, I got distracted.

            it’s cheaper and easier to get a fast ethernet switch than a fast WiFi router

            I agree, but I also don’t - most consumers don’t really know what a switch is or why they might need one. Most switches found in houses are either integrated with a router, power line adapter, or Wi-Fi access point. While a good switch is absolutely going to be much cheaper than a good Wi-Fi AP, most people wouldn’t really look to buy one. They might search for “Ethernet hub” on Amazon and luck into buying a decent switch, but I think most people think in terms of Wi-Fi these days, so it’s probably easier to get a Wi-Fi AP than a switch.

            Also, just a minor nitpick: “fast Ethernet” is a little confusing, as terminology, because that’s the marketing name used to refer to 100mbps Ethernet connections (often indicated on network devices as FE) - so named because it was the successor to 10mbps (regular) Ethernet. (damn you, marketing people! I blame y’all for what you did to USB) When we discuss this kind of thing, it’s clearer to refer to ‘high speed Ethernet’ or refer specifically to line speed (e.g. 10GbE) - unless we’re talking about 100mbps Ethernet! Although, even then, it’s probably a bit confusing these days - I’d call it 10/100 Ethernet usually, rather than fast Ethernet, unless I was being really lazy (“yeah just stick it in the f/e port”)

            I doubt any of this has as much of a latency impact as WAN factors

            It definitely can, but in a properly functioning network, I’d agree. If you have a faulty connection or a significant source of interference or impedance, then that would be much more of an issue than anything else; otherwise, yeah, it’s going to be the Internet where most of the latency comes into play. I would estimate that probably 75% of people could get big improvements to their online experience by making changes to their home network, but at a certain point, yes, contention becomes the bottleneck, which is not so easily solved :)

        • Interference is a big issue for Wi-Fi as well.

          You may be able to get the latency and the throughput, but if you’re dropping packets because of some noise in the air, that’s not good for gaming.

          I also used stadia and have a different setup now… neither one worked very well over WiFi despite some pretty high end networking. I’d still get the occasional blip where everything would get super blurry because … 🤷‍♂️

          Part of that I think is the Wi-Fi chipset in my computer misbehaving, but I could never reproduce it in testing; in practice I’d run into an issue for a few seconds every time … which doesn’t seem like much until you lose a game, or you’re about to beat some important challenge and then mAlFunCTion.

          • Yep, I mean, the comment you’re replying to literally contains the phrase, “the biggest issues are interference…” haha

            Likewise, it’s something that’s likely to improve as we tend to move away from the 2.4GHz band.

            Dropping packets is definitely more of a problem for streaming in particular, rather than anything else, because like you said, if you drop packets you’re going to get degraded quality video. If you were gaming locally, it wouldn’t really affect you as much. Online games have extremely good, well designed methods of compensating for dropped packets in a way that streaming will never be able to match.
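            One common compensation technique (a sketch of the general idea, not how any specific game implements it) is snapshot interpolation: the client renders a little in the past and interpolates between the snapshots it did receive, so a single lost update widens the interpolation window instead of causing a visible jump.

```python
# Entity interpolation sketch: the client renders ~100 ms behind real time
# and interpolates between the two snapshots surrounding the render time.
def interpolate(snapshots, render_time):
    # snapshots: sorted list of (time_ms, x_position)
    for (t0, x0), (t1, x1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time <= t1:
            alpha = (render_time - t0) / (t1 - t0)
            return x0 + alpha * (x1 - x0)
    return snapshots[-1][1]  # fall back to the latest known position

# Server ticks every 50 ms; the 150 ms snapshot was lost in transit.
received = [(100, 10.0), (200, 20.0)]
# Client renders 100 ms behind real time (250 ms), i.e. at t = 150:
print(interpolate(received, 150))  # -> 15.0, smoothly bridging the gap
```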

            • Yep, I mean, the comment you’re replying to literally contains the phrase, “the biggest issues are interference…” haha

              Oops, yup, read that one wrong.

              Likewise, it’s something that’s likely to improve as we tend to move away from the 2.4GHz band.

              I’m not so sure. We’ve been on 5GHz for a while … even there, or as recently as WiFi 6 (I forget the exact band), there are still lots of problems.

              Dropping packets is definitely more of a problem for streaming in particular, rather than anything else, because like you said, if you drop packets you’re going to get degraded quality video. If you were gaming locally, it wouldn’t really affect you as much. Online games have extremely good, well designed methods of compensating for dropped packets in a way that streaming will never be able to match.

              Yes and no; dropping packets can still really badly impact competitive games. Casual games that use client-authoritative movement mostly aren’t affected, though.

    • I would say the technology for cloud gaming is here today, but the home internet connections of a lot of people aren’t ready yet.

      You witness this a lot with video conferencing. People tell one person their audio/video is shitty, and that person just shrugs and says “yeah, I have bad internet.” In my head I’m screaming “Well, what have you tried?!” or “I see you sitting beside the refrigerator there!”

      • Yeah… or microphones… I really wish they’d start putting the noise cancelling as an option on the receiving end… lots of people don’t care to set up their audio right and then you get god awful static, crunching, or breathing in your ears.

        It’s especially prevalent in gaming where headset mics dominate. 🙃

    • Those games are quite well matched with cloud streaming. An example of a game which isn’t suitable for cloud gaming would be a competitive FPS such as Rainbow Six Siege, where the additional delay imposed by the connection between the player and the game can be a significant disadvantage. The only way this would become acceptable is if you live close enough to the host device that the latency is very low, or the host device is very close to the game server itself.

      • I had Stadia too and played a lot of Destiny 2. I must say that I was highly impressed by the low latency. I literally couldn’t notice that I wasn’t playing locally, unless my internet went down.

        Only when I took Stadia with me to a random airbnb did I start noticing any type of latency, and then we just played Mortal Kombat or other fighting games where you can just mash the buttons.

  • The speed of light, so 50ms or so assuming locations on Earth. In practice a bit more because you have to go around it rather than through the core. Servers already have to make retroactive calls, which is why it looks like you hit but then you didn’t sometimes.

    Interestingly enough, Starlink has lower latency than wire despite the longer path because light travels slower than c through glass fiber.

  •  Nighed   ( @Nighed@sffa.community ) 

    The base limit is the speed of light/electricity: it takes X time for a signal to travel, and this is your base latency. For example, it takes about 70ms for light to travel halfway round the world (it has to go round, not through). This can be improved by talking to servers that are closer to you and by taking more direct links, but it can’t be improved beyond the rules of physics.

    On top of this you get really small amounts of processing delays as data is passed through various routers/computers on the way to the destination.

    The real problem comes from congestion - if there is a lot of data being transferred between two destinations, the infrastructure between them might not be able to cope. This may result in messages being queued (causing a delay) or dropped (your controls don’t make it to the server!) To avoid this, the network will route your message via somewhere else with less demand, increasing the distance and delay (but spreading the load)

    Unfortunately, if that overloaded cable is the one bringing data into your neighborhood, then there likely isn’t an alternative route. In the UK at least, we are (finally) building out a fibre to the premises internet network that effectively fixes any local bottlenecks.

    If you want to see where your latency is coming from, you can run a traceroute using various applications (or even directly in Windows). This will show you the latency between each router that your data is traveling through on its route to its destination.

    Edit addition: for game streaming the network delays are added onto the natural delays of running the game (controls -> computer -> processing -> display/speakers).

    The other big additional delay for streaming is that in order to reduce the network load of streaming the game the image is compressed and encoded to be sent to you (much more than is done for your monitor cable).

    This is a computationally intensive operation that can take a good few ms. The better the computers at either end, the faster this can be done. However, the big way forward here is hardware encoding/decoding. By using hardware that is made to do just encoding/decoding and nothing else, this can be done much faster.

    These encoders are commonly on graphics cards, and in the graphics parts of CPUs. As newer encoding formats are created and hardware encoders created (and actually included), this area will become much faster.

    Source: programmer with a computer science degree and a vague interest in networking.

    On mobile, so sorry for bad editing.

  • The lag has several components: input lag between the peripherals and your computer, network transmission to the server, the regular rendering of the game, live transcoding of the game, the network back again, and decoding the stream on your device. The rest are pretty much insignificant.
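    Those components can be sketched as a latency budget. Every number below is an assumption for illustration, not a measurement of any real service:

```python
# Illustrative end-to-end latency budget for game streaming, summing the
# stages listed above. All numbers are assumptions for illustration only.
budget_ms = {
    "input (controller -> client -> host)": 4,
    "uplink (controls to game server)":    15,
    "render one frame at ~60 fps":         16,
    "video encode (hardware)":              5,
    "downlink (video to client)":          15,
    "decode + display on client":           4,
}
total = sum(budget_ms.values())
print(f"end-to-end: {total} ms")  # -> end-to-end: 59 ms
```

Even with generous assumptions, the two network legs dominate, which is why server placement matters so much.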

    The biggest way to reduce lag I can think of is if the server is literally in your city, and the connection between you and it passes through as few nodes as possible. Some video streaming services partner with ISPs to put their servers in the same facilities to reduce overhead and improve the user experience. I’d assume that gaming would benefit from that too, but this is harder to implement.

    Another way to improve networking lag is by prioritising game streaming data over other data, QoS (quality of service), is really important both for the home network and on the ISP side.

    This should be obvious, but don’t use a VPN.

    For the video transcoding, it can be pretty quick, but having dedicated hardware like NVENC would be faster than using the CPU, not just in terms of FPS, but also in latency if given the same FPS (through FPS cap).

    Higher FPS. The more frames per second, the lower the input lag, though it only matters if you eliminate network lag first.
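    A quick sketch of why higher FPS lowers input lag (assuming an input waits on average half a frame to be sampled, then a full frame to render; real pipelines vary):

```python
# Rough input-delay model: an input lands mid-frame on average, so it waits
# about half a frame before being sampled, plus one full frame to render.
def avg_input_delay_ms(fps: float) -> float:
    frame_ms = 1000 / fps
    return 0.5 * frame_ms + frame_ms  # sampling wait + render time

print(round(avg_input_delay_ms(30), 1))   # -> 50.0
print(round(avg_input_delay_ms(60), 1))   # -> 25.0
print(round(avg_input_delay_ms(120), 1))  # -> 12.5
```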

    I should mention that I have never used any game streaming service, and I don’t have the equipment to test lag either.

  •  Platform27   ( @Platform27@lemmy.ml ) 

    I think we are constantly progressing in that field. One issue for latency was that controllers used to contact your device, and then the server. Now they can connect directly to the server. Things will improve, like it or not.

    For right now, I think the biggest hurdle is with ISPs.

    1. Data caps can be quite common, in many countries. Essentially creating a huge limit on how much you can (if at all) play.
    2. Most people’s router and access point hardware needs upgrading. A lot of the stock router AIOs from ISPs are really bad, creating a bottleneck before the data even reaches the servers.

    Another hurdle I can see is companies profit sharing. Everyone wants a large cut, so I’d expect multiple streaming options… and many failures, like what we’re seeing on the movies/series streaming model… just with games it’ll be soooo much worse.

  •  Tak   ( @Tak@lemmy.ml ) 

    I feel a lot of the responses here are talking about cloud gaming not game streaming.

    Game streaming needs to be easier to do for it to become more popular. There’s a bunch of half baked solutions through different hardware and software when you could just physically move the hardware running the game in most cases.

    Cloud gaming is a hard sell when the cost to play most games on your own hardware is really fucking cheap compared to most media. Like with the QWERTY keyboard, people will do the traditional thing because they aren’t forced to change and it’s good enough.

      •  Tak   ( @Tak@lemmy.ml ) 

        You’re right and I guess I’m trying to say that it’s not as simple as: Turn on console/computer then launch game

        Plus we’re not discussing the intricacies of the game not being on steam or consoles. (we could argue it’s easier in some ways too with the steam deck)

        • I hear ya. It seems like a large portion of what currently stops game streaming is the internet part of the equation. Even in-home streaming doesn’t like being passed through a router, and gets slowed down by that somewhat.

  • The best thing we can do for latency is get all the gamers into a big sphere configuration in outer space. It could be like 100 miles across and house everybody, leading to ridiculously low latency. Then have all the agricultural bots and whatnot handle all the stuff on the periphery like food and gadgets and stuff.