Behind the scenes of live streaming the FIFA World Cup 2018


Globo.com, the digital branch of Globo Group, had the rights to do the online live streaming of the FIFA World Cup 2018 for the entire Brazilian national territory.

We have done this in the past, and I think sharing the experience may be useful for curious minds that want to learn more about the digital live streaming ecosystem, as well as for people interested in how Brazil's infrastructure and users' demand behave in an event of this scale.

Before the event – Road to the World Cup

On average, we usually ingest and process about 1TB of video, and users fetch around 1PB, every single day. Even before the World Cup started, the live stream of a single soccer match had a peak of more than 500K simultaneous users and more than 400K requests per second.

When comparing these numbers to previous events such as the Olympic Games or the 2014 FIFA World Cup, we can see an exponential growth in demand.


Back in 2014, the Globo.com CDN was equipped with 20Gbps network interfaces. Now the nodes were upgraded with 40Gbps, 50Gbps, and 100Gbps NICs. Processors were also upgraded, enabling us to deliver 84Gbps from a single machine as part of the preparation for the World Cup.

I’m glad to say that the Linux kernel fine-tuning required was minimal, since newer kernel versions are very well tuned by default.


We broke the simultaneous-users record set by the 2014 FIFA World Cup well before the first 2018 World Cup matches. We also noticed an increase in the overall bitrate, which likely indicates that the Internet infrastructure in Brazil has improved significantly in the past four years.

Platform overview – The strategy 1:1:1

Let’s not focus on the workflow before the video arrives at our ingest encoders; just assume it is coming from Russia’s stadiums and reaching our ingest encoders directly. With this simplification in place, we can assume there are basically two kinds of users interacting with the video platform: the ones producing the video and the ones consuming it on the other end.


The consumers of the video are the visitors of our internet properties, and they watch the live content through the Globo.com video player, which is responsible for requesting video content from Globo.com’s CDN or one of our CDN partners.

The Globo.com player is based on Clappr, an open source HTML5 player that uses hls.js and Shaka Player as its core playback engines.
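Embedding the player on a page is straightforward; a minimal sketch following Clappr's documented usage would look like the snippet below (the stream URL and element id are placeholders, not a real Globo.com setup):

// Minimal sketch, assuming the Clappr bundle is already loaded on the page.
// The source URL and parent element below are placeholders.
var player = new Clappr.Player({
  source: "https://example.com/live/stream.m3u8", // hypothetical HLS manifest
  parentId: "#player",                            // a <div id="player"> on the page
  autoPlay: true
});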

Globo.com CDN nodes are mostly built on top of OSS projects such as Linux, Nginx (with nginx-lua), the Lua programming language, and Redis. Our origin is made of multiple ingest points and a mix of solutions such as FFmpeg, Elemental, and OBS. A Cassandra cluster is also deployed, responsible for storing and manipulating the video segments.

OSS projects play a key role in all the initiatives we have within our technology and engineering teams. We also rely on dozens of open source libraries, and we try as much as we can to give back to the community.

If you want to know how this architecture works, you can learn from the awesome post: Globo.com’s live video platform for the 2014 FIFA World Cup.

Constrained by bandwidth – Control the ball

The truth is: the Internet is physically limited. It doesn’t matter how many servers you add; in the end, if a group of users reaches us through a 10Gb/s link, that is all we can stream to them.

Sure, we can also offload to external CDNs and add more PoPs, but I hope you got the idea! 🙂

In a big event such as the World Cup, there will be congestion on the links between our CDN and the final users. How we tackle this problem of limited bandwidth can be divided into three levels:

  1. OS :: TCP congestion control – the lowest level of control over the connection; when the link is saturated, this control is applied per user connection.
  2. Player :: ABR algorithm – it watches metrics such as network throughput, CPU load, and dropped frames, among others, to decide whether it should adapt to a higher or lower bitrate rendition.
  3. Server :: group bitrate control – when we identify that a group of users sharing the same link is about to saturate it, we can try to steer the players toward a lower bitrate and accommodate more users (see the sketch after this list).
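To make the third level more concrete, here is a minimal sketch of such a heuristic (illustrative JavaScript, not our production code): given an estimate of a shared link's capacity and the number of viewers behind it, cap the rendition the players are steered to so the aggregate demand stays below saturation. The bitrate ladder and headroom factor are assumptions for the example.

// Illustrative sketch only: pick the highest rendition whose aggregate demand
// still fits a shared link, leaving some headroom.
var RENDITIONS_KBPS = [400, 800, 1500, 2500, 4500]; // hypothetical bitrate ladder

function maxRenditionForLink(linkCapacityKbps, concurrentViewers, headroom) {
  headroom = headroom || 0.85; // use at most 85% of the link's capacity
  var budgetPerViewer = (linkCapacityKbps * headroom) / concurrentViewers;
  var allowed = RENDITIONS_KBPS.filter(function(bitrate) { return bitrate <= budgetPerViewer; });
  // fall back to the lowest rendition if even that does not fit
  return allowed.length ? allowed[allowed.length - 1] : RENDITIONS_KBPS[0];
}

// e.g. a 10Gb/s link shared by 5,000 viewers: 10,000,000 * 0.85 / 5,000 = 1,700 kbps per viewer,
// so players behind that link would be capped at the 1500 kbps rendition.
console.log(maxRenditionForLink(10000000, 5000)); // 1500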

During the event – Goals

Even before the knockout stage, we were able to beat all of our previous records, serving about 1.2M simultaneous users during a single match. Our live CDN delivered, at its peak, about 700K requests/s, and our worst response time was half a second for a 4-second video segment.

Some of our servers reached a peak of 37Gb/s of bandwidth. We also delivered a 4K live stream using HEVC, with a delay of around 25 seconds.

We are constantly evolving the platform and looking at bleeding-edge technologies such as AV1. With the help of the open source community and the growing talent on our technology teams, we hope to keep beating records and delivering the best experience to our users.

References

How to measure video quality perception

Update 3 (05/16/2020): Wrote an updated guide to use VMAF through FFmpeg.

Update 2 (01/06/2016): Fixed reference video bitrate unit from Kbps to KBps

Update 1 (10/16/2016): Anne Aaron presented VMAF at Demuxed 2016.

When working with videos, you should focus your efforts on the best streaming quality, lower bandwidth usage, and low latency in order to deliver the best experience to your users.

This is not an easy task. You often need to test different bitrates, encoder parameters, fine-tune your CDN, and even try new codecs. You usually run a process of testing combinations of configurations and codecs and check the final renditions with your naked eyes. This process doesn’t scale; can’t we just trust computers to check that?

bit rate (bitrate): a measure often used in digital video, usually assumed to be the rate of bits per second; it is one of the many terms used in video streaming.

[Figure: same resolution, different bitrates.]

codec: an electronic circuit or software that compresses or decompresses digital content (e.g. H.264 (AVC), VP9, AAC (HE-AAC), AV1, etc.).

We were about to start a new hack day session here at Globo.com and, since some of us had learned how to measure the noise introduced when encoding and compressing images, we thought we could play with what we had learned by applying those methods to measure video quality.

We started by using the PSNR (peak signal-to-noise ratio) algorithm, which can be defined in terms of the mean squared error (MSE) on a decibel scale.

PSNR: an engineering term for the ratio between the maximum possible power of a signal and the power of corrupting noise.

First, you calculate the MSE, which is the average of the squares of the errors, and then you normalize it to decibels.


MSE = ( ∑ᵢ ∑ⱼ (n1[i][j] - n2[i][j])² ) / (m * n)
  * n1 is the original image, n2 the comparable image, and m x n is the image size
PSNR = 10 * log₁₀( MAX² / MSE )
  * MAX is the maximum possible pixel value of the image


For 3D signals (colored images), your MSE needs to sum the MSEs of each plane (i.e. RGB, YUV, etc.) and then divide by 3 (or, equivalently, keep the sum and use 3 * MAX² in the PSNR numerator).

To validate our idea, we downloaded videos (720p, H.264) with a bitrate of 3400 kbps from distinct groups such as News, Soap Opera and Sports. We called this group of videos the pivots, or reference videos. After that, we generated transrated versions of them at lower bitrates: 700 kbps, 900 kbps, 1300 kbps, 1900 kbps and 2800 kbps renditions for each reference video.

Heads up! Typically the pivot video (more commonly referred to as the reference video) uses truly lossless compression; the bitrate of a 720p YUV420p raw video at 24fps would be 1280 x 720 x 1.5 (bytes per frame, given the YUV420 format) x 24fps / 1000 = 33177.6 KBps, far more than what we used as reference (3400 kbps).

We extracted 25 frames from each video and calculated the PSNR, comparing each pivot frame to the corresponding frame from the lower-bitrate versions. Finally, we calculated the mean. Just to help you read the numbers below: a higher PSNR means the image is more similar to the pivot.
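The calculation itself is small enough to script. The sketch below (plain JavaScript, just to illustrate the idea, not necessarily how we implemented it) applies the MSE/PSNR formulas above to frames given as flat arrays of samples (passing interleaved RGB or YUV samples is equivalent to averaging the per-plane MSEs) and then averages the PSNR over the extracted frames; loadFrameSamples is a hypothetical helper standing in for however you decode the extracted images.

// Minimal sketch of the PSNR measurement described above.
// Frames are flat arrays of pixel samples (e.g. interleaved RGB), all the same length.
function mse(reference, distorted) {
  var sum = 0;
  for (var i = 0; i < reference.length; i++) {
    var diff = reference[i] - distorted[i];
    sum += diff * diff;
  }
  return sum / reference.length;
}

function psnr(reference, distorted, maxValue) {
  maxValue = maxValue || 255; // 255 for 8-bit video
  var error = mse(reference, distorted);
  if (error === 0) return Infinity; // identical frames
  return 10 * Math.log10((maxValue * maxValue) / error);
}

// loadFrameSamples(videoName, frameIndex) is a hypothetical helper that returns the
// decoded samples of one of the 25 extracted frames.
function meanPsnr(pivotName, renditionName, frameCount) {
  var total = 0;
  for (var i = 0; i < frameCount; i++) {
    total += psnr(loadFrameSamples(pivotName, i), loadFrameSamples(renditionName, i));
  }
  return total / frameCount;
}
// e.g. meanPsnr("news_pivot", "news_700kbps", 25) would produce one cell of the table below.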

Mean PSNR (dB) against the 3400 kbps reference:

            700 kbps   900 kbps   1300 kbps   1900 kbps   2800 kbps
Soap Op.     35.0124    36.5159     38.6041     40.3441     41.9447
News         28.6414    30.0076     32.6577     35.1601     37.0301
Sports       32.5675    34.5158     37.2104     39.4079     41.4540
[Figure: a visual sample.]

We defined a PSNR of 38 (based on our observations) as the ideal, but then we noticed that the News group didn’t meet that goal. When we plotted the News data on a graph, we could see what had happened.

The issue with the videos from the News group is that they combine different sources: external traffic cameras with poor resolution, talking heads shot by studio cameras with good resolution and quality, scenes with computer graphics (like the weather report), and others. We suspected that the News average was affected by these outliers, but this kind of video is part of our reality.

[Figure: PSNR per frame for a News video – the different video sources are visible as clusters.]

We needed a better way to measure quality perception, so we searched for alternatives and found one of Netflix’s posts: Toward A Practical Perceptual Video Quality Metric (VMAF). From it, we learned that PSNR does not consistently reflect human perception and that Netflix was creating ways to address this with the VMAF model.

They created a dataset with several videos, including videos that are not part of the Netflix library, and had real people grade them; they called this subjective score DMOS. Now they could compare how each algorithm scores against the DMOS.

[Figure: FastSSIM, PSNRHVS, PSNR and SSIM (y) vs DMOS (x).]

They realized that none of these metrics was perfect, even though each has strengths in certain situations. So they adopted a machine-learning-based model, a Support Vector Machine (SVM) regressor, to design a metric that seeks to reflect human perception of video quality.

The Netflix approach is much broader than using PSNR alone. It takes into account more features, such as motion, different resolutions and screens, and it even allows you to train the model with your own video dataset.

“We developed Video Multimethod Assessment Fusion, or VMAF, that predicts subjective quality by combining multiple elementary quality metrics. The basic rationale is that each elementary metric may have its own strengths and weaknesses with respect to the source content characteristics, type of artifacts, and degree of distortion. By ‘fusing’ elementary metrics into a final metric using a machine-learning algorithm – in our case, a Support Vector Machine (SVM) regressor”

Netflix about VMAF

The best news (pun intended) is that VMAF is open sourced (FOSS) by Netflix and you can use it right now. The following commands can be executed in a terminal. Basically, with Docker installed, they build VMAF, download a video, transcode it (using an FFmpeg Docker image) to generate a comparable video, and finally check the VMAF score.


# clone the project (later they'll push a docker image to dockerhub)
git clone --depth 1 https://github.com/Netflix/vmaf.git vmaf
cd vmaf
# build the image
docker build -t vmaf .
# get the pivot video (reference video)
wget http://www.sample-videos.com/video/mp4/360/big_buck_bunny_360p_5mb.mp4
# generate a new transcoded video (vp9, vcodec:500kbps)
docker run --rm -v $(pwd):/files jrottenberg/ffmpeg -i /files/big_buck_bunny_360p_5mb.mp4 -c:v libvpx-vp9 -b:v 500K -c:a libvorbis /files/big_buck_bunny_360p.webm
# extract the yuv (yuv420p) color space from both videos
docker run --rm -v $(pwd):/files jrottenberg/ffmpeg -i /files/big_buck_bunny_360p_5mb.mp4 -c:v rawvideo -pix_fmt yuv420p /files/360p_mpeg4-v_1000.yuv
docker run --rm -v $(pwd):/files jrottenberg/ffmpeg -i /files/big_buck_bunny_360p.webm -c:v rawvideo -pix_fmt yuv420p /files/360p_vp9_700.yuv
# check the VMAF score
docker run --rm -v $(pwd):/files vmaf run_vmaf yuv420p 640 368 /files/360p_mpeg4-v_1000.yuv /files/360p_vp9_700.yuv --out-fmt json
# and you can even check the VMAF score using an existing trained model
docker run --rm -v $(pwd):/files vmaf run_vmaf yuv420p 640 368 /files/360p_mpeg4-v_1000.yuv /files/360p_vp9_700.yuv --out-fmt json --model /files/resource/model/nflxall_vmafv4.pkl


You saved around 1.89 MB (about 37% of the roughly 5 MB source file) and still got a VMAF score of 94.


{
  "aggregate": {
    "VMAF_feature_adm2_score": 0.9865012294519826,
    "VMAF_feature_motion_score": 2.6486005151515153,
    "VMAF_feature_vif_scale0_score": 0.85336751265595612,
    "VMAF_feature_vif_scale1_score": 0.97274233143291644,
    "VMAF_feature_vif_scale2_score": 0.98624814558455487,
    "VMAF_feature_vif_scale3_score": 0.99218556024841664,
    "VMAF_score": 94.143067486687571,
    "method": "mean"
  }
}

Using a composed solution like VMAF or VQM-VFD proved to be better than using a single metric. There are still issues to be solved, but I think it’s reasonable to use such algorithms, plus A/B tests, given how impractical it is to hire people to check video impairments.

A/B tests: for instance, you could use X% of your user base for Y days, offering them the newest changes, and see how many of them reject it.
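As an illustration only (hypothetical bucketing logic, not a description of our setup), assigning users to such a test can be as simple as hashing a stable user id so the same user always sees the same variant:

// Illustrative sketch: deterministically put a share of users into the experiment cohort
// by hashing their id, so each user consistently gets the same variant.
function cohortFor(userId, experimentShare) { // experimentShare e.g. 0.05 for 5%
  var hash = 0;
  for (var i = 0; i < userId.length; i++) {
    hash = (hash * 31 + userId.charCodeAt(i)) >>> 0; // simple 32-bit rolling hash
  }
  return (hash % 100) < experimentShare * 100 ? "experiment" : "control";
}

console.log(cohortFor("user-42", 0.05)); // "experiment" or "control"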

Functor, Pointed Functor, Monad and Applicative Functor in JS



// This post will briefly explain (omitting some parts), in code, what
// Functor, Pointed Functor, Monad and Applicative Functor are. Maybe by reading the
// code you will easily grasp these functional concepts.
// if you only want to run this code go to:
// https://jsfiddle.net/leandromoreira/buq5mnyk/
// or https://gist.github.com/leandromoreira/9504733c7f8c6361c46270ea953d8409
// This code requires you to have require.js loaded (or you can load ramda instead :P)
requirejs.config({
paths: {
ramda: 'https://cdnjs.cloudflare.com/ajax/libs/ramda/0.13.0/ramda.min'
},
});
require(['ramda'], function(_) {
// First let's create a Container that is a type that holds (wraps) a value, a useful abstraction to handle state.
var Container = function(x) {
this.__value = x;
}
// of is a method to create Container of x type
Container.of = function(x) {
return new Container(x);
};
console.log("should be 3", Container.of(3))
// We can improve this building block (Container) by providing a way to handle the wrapped value,
// this is basically a Functor, which is a type that implements map (it is mappable) and obeys some laws.
// By the way a Pointed Functor is a functor with an of method.
Container.prototype.map = function(f) {
return Container.of(f(this.__value));
}
var c4 = Container.of(4)
var inc = function(x) {
return x + 1
}
var c5 = c4.map(inc)
// We first created a container of 4, then we mapped an increment over it, resulting in a container of 5
console.log("should be 5", c5)
// Maybe is a functor that checks if the value is null/undefined;
// it is useful to avoid errors like "Cannot read property x of null"
Container.prototype.isNothing = function() {
return (this.__value === null || this.__value === undefined);
};
// Now our map will also check whether the value is valid or not.
Container.prototype.map = function(f) {
return this.isNothing() ? Container.of(null) : Container.of(f(this.__value));
};
var address = function(person) {
return person.address;
};
var upperCase = function(t) {
return t.toUpperCase()
}
// Although we're passing an invalid value to the container, it won't break
console.log("should be null without errors", Container.of(null).map(address).map(upperCase))
// but when we do pass the right parameter it produces the expected output
console.log("should be HERE", Container.of({
name: "Diddy",
address: "here"
}).map(address).map(upperCase))
// this is good, but a failure with no error message can make things worse 😦
// This function maps an ordinary function over a functor
var map = _.curry(function(ordinaryFn, functor) {
return functor.map(ordinaryFn);
});
var aFunctor = Container.of(2)
var sum6 = function(x) {
return x + 6
}
// given an ordinary function and a functor, it produces another functor
var plus6 = map(sum6)
var y = plus6(aFunctor)
console.log("should be a Functor of 8", y)
// Either is a functor that can return two types: either Right (normal flow) or Left (some error occurred).
// What is great here is that we can say what the error was.
var Left = function(x) {
this.__value = x;
};
Left.of = function(x) {
return new Left(x);
};
Left.prototype.map = function(f) {
return this;
};
var Right = function(x) {
this.__value = x;
};
Right.of = function(x) {
return new Right(x);
};
Right.prototype.map = function(f) {
return Right.of(f(this.__value));
}
console.log("should be 10", Right.of(8).map(inc).map(inc))
console.log("should be unchaged 8", Left.of(8).map(inc).map(inc))
var nonNegative = function(x) {
if (x < 0) {
return Left.of("you must pass a positive number")
} else {
return Right.of(x)
}
}
console.log("should be 10", nonNegative(9).map(inc))
console.log("should be an error message", nonNegative(4).map(inc))
// IO is a functor that holds a function as its value; instead of mapping over a value,
// it maps over functions and composes them, like an array of functions.
var IO = function(f) {
this.__value = f;
};
IO.of = function(x) {
return new IO(function() {
return x;
});
};
IO.prototype.map = function(f) {
return new IO(_.compose(f, this.__value));
};
var composedLazyFunctions = IO.of(3).map(inc).map(inc).map(inc)
console.log("this is a lazy composed function", composedLazyFunctions)
console.log("this is the execution of that composed function", composedLazyFunctions.__value())
var readFile = function(filename) {
return new IO(function() {
return "read file from " + filename
});
};
var print = function(x) {
return new IO(function() {
return x
});
};
// cat will be a composed function that produces an IO of an IO :X
var cat = _.compose(map(print), readFile)
var catGit = cat('.git/config')
console.log("it should be an IO of IO IO(IO())", catGit)
// This creates an awkward situation where, if we want the real value, we need to call
// catGit.__value().__value(). How about creating a join that unwraps the value?
IO.prototype.join = function() {
return this.__value()
};
console.log("should be 'read file from .git/config'", catGit.join().join())
// Notice that we still need to call join twice. What if we join every time we map?
// This is what we know as chain.
var chain = _.curry(function(ordinaryFn, functor) {
return functor.map(ordinaryFn).join();
});
var complexSum = function(initialNumber) {
return new IO(function() {
var x = initialNumber * 4
var y = x * 4
return (y + 42) - x * 4 // always evaluates to 42, since y = x * 4
});
};
var incIO = function(x) {
return new IO(function() {
return x + 1
});
};
var doubleIO = function(x) {
return new IO(function() {
return x * 2
});
};
var cleverMath = _.compose(
chain(doubleIO),
chain(incIO),
chain(incIO),
complexSum
);
var multiplier = Math.floor((Math.random() * 552) + 7)
var ordinaryValue = Math.floor((Math.random() * 98134123) + 12)
var cleverMathResult = cleverMath(ordinaryValue * multiplier)
console.log("should be 88", cleverMathResult.join())
// Monads are pointed functors that can flatten 🙂
// Now let's finish with an Applicative Functor which is a pointed functor with an ap(ply) method
Container.prototype.ap = function(other_container) {
return other_container.map(this.__value)
}
console.log("should be Container(4)", Container.of(inc).ap(Container.of(3)))
})
// Please consider reading the links below
// http://www.leonardoborges.com/writings/2012/11/30/monads-in-small-bites-part-i-functors/
// https://drboolean.gitbooks.io/mostly-adequate-guide/content/ch8.html