
A Buyer’s Guide to Backhaul


The term “backhaul” broadly refers to the telecom links used to bring digital signals back from the edge of a network to services at its center or core: they are the spokes in a hub-and-spoke network topology.

For example, in mobile networks, backhaul describes the fixed terrestrial telecommunications networks (typically fiber) that link the masts and towers back to the central exchanges and switching centers.

In broadcast, backhaul more broadly describes any type of telecom link that brings the (usually, but not always, unedited) video signals created in the field back to the central point of the network, where they may be combined with other signals to create the output programming. While the models vary, typically this central location will be a TV studio or a transmission playout facility that aggregates a number of such signals, perhaps combining them with signals generated from file servers or local studios, to create the overall audience-facing programming output. In broadcast, the tradition has been to maintain complete, end-to-end quality of service (QoS) control. For this reason, the variables are kept to a minimum: the backhaul link is usually a single Layer 2 data link and is operated as a private circuit.

In the newer streaming and webcasting models of particular relevance to readers of this magazine, backhaul almost universally refers to the connection from the media encoder back to an origin media server, from which, in turn, the content delivery network (which gets the signal to the audience) sources its primary signal. Since this server may be located in an internet hosting center, the telecom links in the field may connect to the server only logically, at Layer 3 (the IP layer), while in practice there may be many underlying Layer 2 links of different types. While they will all carry IP (and so be transparent to an IP-based application), the Layer 2 links may vary significantly in their performance and design characteristics, and they will typically not be used exclusively for the transmission. This presents specific challenges when weighing a move away from the QoS guarantees that come with the carefully controlled systems of legacy broadcast models.

This QoS issue is often cited by those resistant to the change from traditional broadcast to IP broadcast, who claim that the proper way to do things is to use traditional QoS-guaranteed links. The fact is that in most scenarios, IP’s commoditization can provide a nearly comparable (if not equal, or sometimes better) QoS service-level agreement for a fraction of the price of a traditional broadcast link.

Before we look at the link layer options common today, I also want to highlight that the terms “backhaul” and “contribution feed” can be used interchangeably, although the word choice typically reveals who is using the expression. To a studio producer or channel producer -- or even a content delivery network (CDN) operator -- the term “backhaul” is typically a network operator-centric reference. The streaming webcaster or broadcast engineer in the field would talk about the very same telecom link as a “contribution feed.” Ultimately it depends on which direction you look through the pipe. The data flow is from the field to the playout, and so it becomes immediately apparent why backhaul “hauls” the signal “back” to the studio producer, while field engineers “contribute” the video away from themselves and into the playout.

Whichever term you use, the function here is the same: to get live or prerecorded video back to the studio.

In an ideal world, a backhaul/contribution feed will always have higher “goodput” (the useful throughput of the network link) than the encoded bitrate of the video, which allows the video to be transferred in real time. In other words, a 30-minute video encoded at 2Mbps will take 30 minutes (or less) to transfer over a 2Mbps or greater link. If the available bitrate of the backhaul link is lower than the bitrate of the encoded video, then any attempt to use the link for a live contribution feed will result in a broken signal and a poor-quality experience for all end users.
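To put numbers on that, here is a minimal Python sketch. The 20% headroom figure is my own rule of thumb for absorbing real-world jitter, not a standard value:

```python
def can_sustain_live_feed(encoded_kbps: float, goodput_kbps: float,
                          headroom: float = 0.2) -> bool:
    """True if the link's goodput covers the stream plus some safety headroom."""
    return goodput_kbps >= encoded_kbps * (1 + headroom)

def transfer_minutes(duration_min: float, encoded_kbps: float,
                     goodput_kbps: float) -> float:
    """Time to move a recorded file of the given duration over the link."""
    return duration_min * encoded_kbps / goodput_kbps

print(can_sustain_live_feed(2000, 2500))  # True: 25% headroom on a 2Mbps stream
print(transfer_minutes(30, 2000, 4000))   # 15.0: the 30-minute file moves in half the time
```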

In the streaming world, the links will also aim to be of low latency. Many people misunderstand why low latency is important, thinking that it is just about startup time or the delay between an event occurring and it becoming visible at the remote end of the network.

While this is to an extent correct, the bigger issue with latency arises when using TCP/IP-based data transfer for your IP video stream. This applies universally to those using RTMP from encoders such as Flash Media Live Encoder (RTMP runs over TCP), and it usually applies to those encoding to adaptive bitrate protocols such as HTTP Live Streaming (HLS), Smooth Streaming, and HTTP Dynamic Streaming (HDS), since these are HTTP-based and therefore also ride on TCP. An incorrectly configured TCP window size, coupled with the odd lost packet or transmission error on a high-latency link, can produce significant retransmissions, since the entire window -- not just a packet or two of video -- can be dumped. This can render the link ineffective even if its theoretical bitrate is much higher than your encoded video stream.
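The arithmetic behind this is worth seeing: a TCP sender can have at most one window of unacknowledged data in flight, so throughput is capped at window size divided by round-trip time, whatever the raw link speed. A quick sketch (the figures are illustrative, not measurements):

```python
def tcp_throughput_ceiling_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on TCP throughput: one window of data per round trip."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

# A default 64KB window on a 20ms metro link caps out around 26Mbps --
# ample headroom for a 2Mbps stream.
print(tcp_throughput_ceiling_mbps(65_535, 20))    # ~26.2
# The same window over a 600ms satellite or long-haul path caps out
# below 1Mbps, and the 2Mbps stream breaks.
print(tcp_throughput_ceiling_mbps(65_535, 600))   # ~0.87
```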

There are many reliable UDP models, both open source (RUDP) and proprietary (such as Aspera, Motama, and Zixi), that can optimize the throughput of data transfers that would normally use TCP, handling errors in a way that does not destructively dump the entire transfer window. While many of these vendors make extraordinary claims (one claims to reduce the transfer time of a certain file from 3.5 hours to 5 minutes, which is simply not realistic under any circumstances), properly implemented UDP-based alternatives to default TCP can deliver real, if more modest, improvements.
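To illustrate the principle (and only the principle -- this toy simulation is not how any of the named products actually work), here is the selective-retransmit idea at the heart of these schemes: the receiver NACKs only the specific missing packets, so one lost datagram never forces a resend of the whole window:

```python
import random

def deliver(packets, loss_rate=0.1):
    """Simulate one pass over a lossy UDP path: each packet independently survives."""
    return {seq: data for seq, data in packets.items() if random.random() > loss_rate}

def reliable_transfer(packets):
    received = deliver(packets)                     # first pass: some packets lost
    while len(received) < len(packets):
        missing = [s for s in packets if s not in received]  # this is the NACK list
        print(f"NACK: requesting {len(missing)} missing packets")
        # Resend only the missing packets -- never the whole window.
        received.update(deliver({s: packets[s] for s in missing}))
    return [received[s] for s in sorted(received)]

# 100 numbered packets standing in for a second or so of video.
stream = {seq: f"frame-{seq}".encode() for seq in range(100)}
payload = reliable_transfer(stream)
assert len(payload) == len(stream)  # everything arrives after a few NACK rounds
```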

The other complexity with IP is that, at some point in an IP backhaul link, part of the infrastructure is often shared with other network applications that contend for that leg’s available throughput. As an IP backhaul link gets longer, the risk of adverse contention for bandwidth typically increases. Peering between the origin server’s host network and the backhaul networks represents a significant risk, so this stage of the network topology should be well understood when commissioning the service. If it is not, QoS issues can lead to a lot of finger-pointing among service providers, leaving the producer unable to properly resolve the issue or, worse, to apportion responsibility for outages.

So where should you start when commissioning a backhaul link?

Table 1 lists the common link options in descending order of QoS, and therefore in the order of priority you should give them. It should help you work out which available link to use in favor of any other, provided it can offer sufficient capacity and suitable latency.

Pricing indications will always vary massively, and those given here really only indicate an order of magnitude. They are based on my own experience provisioning these links over the past 17 years; some types I have not set up in more than half that time, so please remember that they are ballpark figures. All the prices will also vary with volume commitment. Here I am showing what you would need to pay for your first hour at a new location.

This table likely will stir up a storm of feedback from readers commenting that they have got the services for different prices and that I have missed X or Y model. There will even be those out there who regularly provide commercial broadcasts to national TV channels using public Wi-Fi and who feel that I am being too hard on such options!

To be honest, I hope that’s the case. What we would like to achieve is a wealth of improved data fed to us at the magazine via the comments under the online publication of this article, since that will certainly help anyone making a buying decision or planning their own backhaul. So if you are reading this online and think I have missed something or got something out of date -- or simply wrong -- then please add your input to the comments.

And if you are reading this in print, then I suggest you check in on the website to see the latest input from other readers and perhaps some updates from me, as well as to share other ideas you might have.

We will try to iterate this article regularly, since backhaul is central to webcasting -- particularly live streaming -- and it is an area Streaming Media plans to focus on in more depth.

This article appears in the 2014 Streaming Media Sourcebook.
