An introduction to Encrypted Media Extensions (EME)

SAM DUTTON: Welcome. Welcome to Google
Developers Live. Yeah, we’re going to be talking
today about Encrypted Media Extensions, which you
might also know as EME. My name is Sam Dutton. I’m a developer advocate
for Google Chrome based here in London. And I have with me today John
Luther, who is a product manager for EME, amongst
other things. JOHN LUTHER: Good afternoon,
good morning, good evening wherever you are. SAM DUTTON: Yeah, wherever
you are. I just wondered if you could
tell us a bit about your background. And I know you’re working
on a lot of other stuff as well as EME. JOHN LUTHER: Yep. So let’s see, my title
is product manager of Chrome Media. So right now, I manage all the
video technology in Chrome as well as the WebM Project, which
is an open media format built on the VP8 video codec and Vorbis audio. So anyway, I manage
that stuff. But my background is primarily
in digital media and specifically in compression. I worked for a small
company called On2 Technologies from 2002. And then in 2010, Google
acquired On2 and open sourced the VP8 codec that we had. And that became the
WebM project. So now WebM is really the highest-quality free media format available today. So we’re pretty proud of that. But anyway, yeah, so EME is part
of what we call the video stack in Chrome, sort of all
the innards that make media happen, and that’s my
responsibility. SAM DUTTON: We’ve got all this
video coming to the web. That’s incredible. How are we going to
cope with that? How’s the web going to cope
with all this video? JOHN LUTHER: Yeah, there was a
study done by Cisco called The Zettabyte Era. I think that was last year. I think they update that
study every year. But their projection was I think
something like by the year 2016, some astronomical
percentage of all traffic on the internet will be video. It’s like 80% or something
like that. But anyway, yeah, it’s
becoming a big issue. And as we know from YouTube, I
don’t even know how many hours of video per second now come
into YouTube, but it’s a lot. So there’s sort of two things
that we’re working on. The first is adaptive streaming, which we think will improve the quality of service. Until a few months ago, before we started deploying all this stuff, a client getting a video would get sort of a single stream at
one data rate, meaning there would be so much data per second
to serve that video. And as we know, network
conditions vary a lot. If you ever look at a bandwidth
monitor on your own machine, your available
bandwidth just fluctuates all the time. So adaptive streaming, what we
can do is we can sort of figure out what your available
bandwidth is at any moment and serve you the right video stream
at a data rate that’s more appropriate for you. So as those conditions change,
the data rate will be going up and down. So anyway, this is starting
to be deployed on YouTube. And in Chrome, we have an
extension to the HTML media API that we’ve proposed, called Media
Source Extensions, which enables developers to do
this in their web apps. The other thing that we’re
really psyched about is the VP9 video codec, which is
the successor to VP8. We finished defining that codec
in June, and now we’re optimizing it and
integrating it. It’s now in Chrome. It’s in the beta channel. And that, we’re seeing
results of– you’re familiar with the
H.264 video codec. Just sort of as a benchmark,
with VP9, we’re seeing quality about 50% better than the
highest-quality high-profile H.264, which is their
highest available. And then there’s now an emerging
standard called HEVC. We’re seeing VP9 slightly better
than HEVC, and we’re still going to keep working on
optimizing those things. So it’s pretty exciting. If you look at that earlier
thing I mentioned, the Cisco study, and you say, OK, well, we
have this exploding volume of video on the internet. If a codec like VP9 can shave
half that off, you just doubled the size of the
internet in a way. And it opens more capacity,
because it’s just going to keep growing and growing. Especially with high definition,
now people are talking about 4K video, which
is just mind boggling to me. So anyway, those are two
of the things that are going to help. But we’re always
looking into– we’re doing research just how
to make the user experience just better for online video. That’s also one of the missions
of our project. SAM DUTTON: So going back to
VP8, I know there’s been some work on hardware with VP8. Just wondering, tell us a little
bit about that, because I know you’ve been involved
with some of that. JOHN LUTHER: Yeah, one of the
virtues that I think I have from working in this business
so long with video compression, the hardware, it’s
an important part of the whole ecosystem. But hardware, as we know, takes
a long time to develop. So to get a codec from
definition phase to implementation in software to
hardware, you’re typically talking about a three-year
timeline. It’s a lot. We think it’s too long. We’re trying to do things just
to shorten that time frame. But what we’re starting
to see now is we open sourced VP8 in 2010. We launched it at I/O. So now
that three-year period is beginning to elapse. The Samsung Chrome Books that
we launched last fall, the very thin light ones, they have
a Samsung ARM platform in them that supports
VP8 decoding in hardware, for example. The same is true with
the Nexus 10. The Samsung S4 phone that just
launched has VP8 decoding hardware in it. So there’s also Qualcomm
and Broadcom. There’s a whole host of these
vendors who are starting to ship SoCs (systems-on-chip) that have a VP8 decoding block in them. And encoding blocks are
starting to come, too. So you’re starting to
see that happen. And with VP9, we’ve already
been discussing this with vendors even before it
was strictly defined. And their response has
been they’re very excited about VP9. YouTube has a lot of
plans for VP9. So these are some of the things
that drive the hardware development. And we also have a whole team
of engineers in Finland whose primary work is to design
hardware codecs that these vendors can implement. And we license those
free of charge. So there’s all sorts of hardware
things going on. SAM DUTTON: Some really
interesting stuff coming out of that. By the way, we’ve got
some slides online. And I’m going to show up on the
screen the slide URL that can show you some
demos and so on. If you want to read more about VP9, there’s a great presentation from Google I/O.
There’s a link to it there in the slides. And we’re talking a bit
about the full screen. There’s a URL for a full
screen demo there. I just wanted to briefly kind
of show off the Media Source Extensions stuff. So if we just go to another–. JOHN LUTHER: Oh, cool. SAM DUTTON: I just wanted
to show this in action. Essentially what’s happening
here is we’re getting chunks of video and then playing them
out in a video element here, so getting the chunks
with JavaScript. You can kind of see the code
there, a little bit difficult to get to. But yeah, check that out. It’s pretty straightforward. If you want to have a look at
the code for that, that’s a good place to start. So I really wanted to get
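The chunk-fetching flow in that demo can be sketched roughly as follows. This is a hedged sketch, not the demo's actual source: the chunk URLs and the codec string are placeholders, and at the time of this talk Chrome shipped the API with a WebKit prefix, so real code needed the feature check shown.

```javascript
// Hedged sketch of MSE chunk playback, not the demo's actual source.
// Chunk URLs and the codec string are placeholders; 2013-era Chrome
// shipped this API prefixed (window.WebKitMediaSource), hence the check.

function webmSourceType() {
  // The MIME/codec string a WebM SourceBuffer would be created with.
  return 'video/webm; codecs="vp8, vorbis"';
}

function playChunkedVideo(videoElement, chunkUrls) {
  var MS = window.MediaSource || window.WebKitMediaSource;
  if (!MS) throw new Error('Media Source Extensions not supported');

  var mediaSource = new MS();
  videoElement.src = URL.createObjectURL(mediaSource);

  mediaSource.addEventListener('sourceopen', function () {
    var sourceBuffer = mediaSource.addSourceBuffer(webmSourceType());
    var next = 0;

    function appendNext() {
      if (next >= chunkUrls.length) { mediaSource.endOfStream(); return; }
      var xhr = new XMLHttpRequest();          // fetch one media chunk
      xhr.open('GET', chunkUrls[next++]);
      xhr.responseType = 'arraybuffer';
      xhr.onload = function () {
        sourceBuffer.appendBuffer(new Uint8Array(xhr.response));
      };
      xhr.send();
    }

    // Each time a chunk finishes appending, go and get the next one.
    sourceBuffer.addEventListener('updateend', appendNext);
    appendNext();
  });
}
```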
started with just kind of an overview of what we’re doing
with EME, Encrypted Media Extensions, and getting
back to the topic. JOHN LUTHER: Well, that’s OK. There’s so much going on that
it’s hard to keep it all to one subject. SAM DUTTON: So EME, Encrypted
Media Extensions, is a JavaScript API to allow playback
of encrypted media. Now, this is a W3C proposal
for EME that’s been implemented in Chrome. And the important thing
is that it extends the HTML media element. So in other words, this is an
extension to the functionality of video and audio elements. So I guess the first question
that comes into my head is, what is the problem that we’re
trying to solve with this? JOHN LUTHER: Sure. HTML5, when I first came to
Google in 2010, and even prior to that, everybody was talking about HTML5 and getting very excited about it. Because traditionally, from my work at On2, we were involved with
Adobe and Flash. We had what we called the first high-quality video codec in Flash, VP6. So as the technologies were
emerging and HTML5 was coming on, and the video element,
everybody was very excited about it. But once we started talking to
developers and other vendors and things, they said, well,
yes, it’s a very compelling story, but it’s lacking. In other words, a lot of them
said, we don’t want to rely on these runtimes. We don’t want these mysterious
things that run in the object tag. And you might have security
problems. But at the same time, they
wanted all the functionality that those things had. So the things that they thought
were missing were things like full screen. For various reasons, there was
no full-screen provision in the media element. Adaptive streaming, which you
just showed, that wasn’t– video was almost like the image
tag, just video source equals that file. Play this fast, whatever. And then what a lot of
them said is we want to serve paid content. The copyright holders of that
content have certain conditions. They’re called robustness
requirements that playback agents, clients, browsers
must satisfy. So they said, there’s
other use cases. We talked to some people who
wanted to do corporate training, things that were
private to their company, but they didn’t necessarily
have to lock it down. They just wanted to do an
encrypted stream using HTML. So that’s something. This word kept coming up–
encryption, encryption, encryption. So we looked into it, and we
said, well, probably the main HTML spec is not the place
to propose this. So we proposed the EME spec. Full screen was proposed
by Mozilla and Google. And that is now, I
think, a W3C spec. It’s basically a standard. SAM DUTTON: It’s widely
implemented. JOHN LUTHER: So EME, we
collaborated with Microsoft and Netflix to write that spec
as well as the Media Source Extensions spec and have
implemented both in Chrome. Microsoft has implemented them
in IE11, which they’ve announced recently. And so the problem, I guess,
was just bringing the web platform to feature and
functionality parity with traditionally plug-ins, more
accurately, runtime plug-ins. Things that were doing all these
things already, and they said we need the same functionality in the web platform. That’s how it all kind
of came together. SAM DUTTON: OK, so the
technology we have, which is now in Chrome and not behind a flag, though it can be disabled via a flag, I believe. JOHN LUTHER: Yep. SAM DUTTON: Is the technology
codec dependent? Is it important what codec
we’re using with this? JOHN LUTHER: In Chrome, WebM
is our preferred format, of course, because it’s open. And we also support H.264. But the specification itself is
not dependent on any codec. It’s really up to the
implementers, the browser maker, what codecs they
want to support. But it’s not a determining factor. In other words, in some cases the video tag itself is going to render the content and decode it. In other cases, the CDM, which we can talk about later if you want, will do that. But yeah, the short
answer is that it is not codec dependent. SAM DUTTON: OK, so I guess kind
of maybe the best way to begin with this, if we could
go to my slide, I’ve got a rather complicated
overview of this. But I’m going to try and talk
through an overview of this of how EME works in practice
with the API flow. And then I can get you to go
into some more detail about some of this stuff I’d like
to know a bit more about. Just working through this from
start to finish, imagine you have a web page, a web
app that has a video element in it. And you want to be able to
play some media in that element, but the media
is encrypted. So what happens with EME is that
when the video has been parsed, the browser can detect
that the media is encrypted. And we’ll talk a bit
more about that. And the browser then
fires what’s called a needkey event. And once the application– so your web app with its video
element– receives that event, then it can go through the
process of getting a key in order to interact with
the browser– and we’ll talk about CDMs– in order to decrypt the video
and play it out again. So yeah, it’s this process of
realizing that media is encrypted and then getting
a key, getting the media decrypted, and then playing
it out once that happens. So if we go to a rather more
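That flow can be sketched in JavaScript. The sketch below follows the event and method names of the early 2013 EME draft, which Chrome shipped with a webkit prefix; the modern spec later renamed needkey to "encrypted" and reworked these calls, and the license server URL here is a made-up placeholder.

```javascript
// Hedged sketch of the flow above, using the event and method names of
// the early EME draft as Chrome shipped them (webkit-prefixed). The
// modern spec renamed needkey to "encrypted" and reworked these calls.
// The license server URL is a made-up placeholder.

var LICENSE_SERVER = 'https://license.example.com/get'; // hypothetical

// Small pure helper: key messages and licenses travel as byte arrays.
function stringToBytes(str) {
  var bytes = new Uint8Array(str.length);
  for (var i = 0; i < str.length; i++) bytes[i] = str.charCodeAt(i);
  return bytes;
}

function setUpEncryptedPlayback(video, keySystem) {
  var lastInitData = null;

  // 1. The browser parses the media, sees it is encrypted, fires needkey.
  video.addEventListener('webkitneedkey', function (event) {
    lastInitData = event.initData;
    // 2. Ask the CDM to generate a key request for this content.
    video.webkitGenerateKeyRequest(keySystem, event.initData);
  });

  // 3. The CDM emits a message for the app to forward to a license server.
  video.addEventListener('webkitkeymessage', function (event) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', LICENSE_SERVER);
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
      // 4. Hand the returned license/key to the CDM so playback can start.
      video.webkitAddKey(keySystem, new Uint8Array(xhr.response),
                         lastInitData, event.sessionId);
    };
    xhr.send(event.message);
  });
}
```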
complex diagram here, John can kind of talk us through this
in some more detail. I have some more questions. JOHN LUTHER: So this diagram
is showing another– like I said, there are a bunch
of use cases for encrypted media in the browser. And one of them is for paid
content and rights management, as I mentioned, these
requirements that copyright holders have to license
the content. So this is a case of a CDM that
we have created and are now shipping in Chromebooks,
which is based on Widevine, which is a company that Google
acquired I think in 2011, somewhere around there. So this content decryption
module, these modules have essentially three functions
in software. There’s the license
acquisition. SAM DUTTON: So this
is a plug-in. This is a Pepper plug-in,
in this case. JOHN LUTHER: In this
case, yeah. So in software, it also decrypts
the encrypted video, and then it decodes the
video bit stream. And then it hands the raw frame
data to the video tag for display. Now a CDM doesn’t have to do
all three of those things. For example, if you have
hardware on the device that can do the decrypting and the
decoding, you can do it there. There’s some other provisions
that you have to do. But essentially, yes, the CDM’s
job is to get a license with a key to decrypt the video,
do that in an encrypted manner across the internet,
and then decode and decrypt the stuff. That’s just basic for that. SAM DUTTON: So for the
non-video buff in the audience, one thing I just
wanted to clarify is the kind of confusing terms, like just
the difference between decoding and decrypting. JOHN LUTHER: I know. I’m sure there are a lot of people
out there that think they’re the same thing. And actually, I have a tendency
to go really nerd on this stuff, because I worked at
RSA, which is an encryption company, prior to getting
into video. So anyway, decrypting. Encrypting data, you just take
it, and you sort of scramble it up so that somebody can’t use
it or read it unless it’s been descrambled. So decrypting is the process
of descrambling something that’s been scrambled using
encryption technology. Decoding is more accurately
called decompression. So you have a video frame that
has been compressed with a codec, like VP8, to make it
much more efficient to transmit over the internet. Raw video is very big. Compress it down, transmit it
over a network or an optical disk, whatever. To restore it, to show it
to the user, you have to decompress it, which video
people like me call decoding. But it’s more accurately sort
of like a zip file. But it works in a slightly
different way. But you’re just restoring
it from a compressed state back to– SAM DUTTON: To the full. JOHN LUTHER: Yeah. It’s not quite its original form. Because like all compression,
some data has been lost. But yes, you’re restoring it
to the extent that you can show it to the user. Anyway, sorry about that. SAM DUTTON: Two things
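As a toy illustration of that distinction (an analogy, not real media code): XOR "encryption" scrambles bytes and is exactly reversible given the key, while run-length "compression" shrinks runs of repeated values and is undone by decoding. Real video codecs are lossy, so their decoding does not restore every bit.

```javascript
// Toy analogy only, not real media code. XOR "encryption" scrambles
// bytes and is exactly reversible given the key; run-length "compression"
// shrinks runs of repeated values and is undone by decoding. Real video
// codecs are lossy, so their decoding does not restore every bit.

function xorCrypt(bytes, key) {   // the same call encrypts and decrypts
  return bytes.map(function (b) { return b ^ key; });
}

function rleEncode(bytes) {       // output: [value, runLength, ...]
  var out = [];
  for (var i = 0; i < bytes.length;) {
    var j = i;
    while (j < bytes.length && bytes[j] === bytes[i]) j++;
    out.push(bytes[i], j - i);
    i = j;
  }
  return out;
}

function rleDecode(pairs) {       // restore the original byte sequence
  var out = [];
  for (var i = 0; i < pairs.length; i += 2) {
    for (var k = 0; k < pairs[i + 1]; k++) out.push(pairs[i]);
  }
  return out;
}
```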
that spring to mind. The first is, how does the
browser know that the content is encrypted? JOHN LUTHER: So part of the– as you said, the parsing, and
again, these terms of art keep coming up– like demuxing. OK, let’s see. What is this file? What are its properties? Those would be part of the header elements, as they’re known in WebM/Matroska. Anyway, say this has
encrypted elements. This file has some encryption
in it. And then that’s what sort of,
as you mentioned, starts the whole process with saying to the
Encrypted Media Extensions implemented in the browser,
we need a key for this. You know how to do that. And then there’s the application developer. I should mention that a lot of
this and the reason that we designed EME and MSE in these
ways is so we want to keep as much application logic as
possible in the hands of the developer with JavaScript,
HTML. Really, we want to keep this
stuff above the browser stack as much as we can. So as the application developer, you can do all sorts of stuff once these events start
to get fired. But that’s really what starts
the ball rolling. This media is encrypted. SAM DUTTON: I have this
media container. JOHN LUTHER: We need to get
a key to decrypt it. SAM DUTTON: Right, right. JOHN LUTHER: Let’s do that. SAM DUTTON: And from what I
understand, too, you don’t have to encrypt every single
frame of the video, which I imagine is– JOHN LUTHER: No, you could. It can be very granular. I mean, you could even encrypt
what are called slices of a video, just specific
parts of a frame. But if you wanted to do more of
a lightweight approach, you could just encrypt
the key frames. And again, this is getting into
the parlance of video. SAM DUTTON: Yeah, yeah, yeah. JOHN LUTHER: So in a compressed
video stream, there are frames that are
the full picture. And then there are frames
between those that, for the sake of efficiency, redundant
data has been removed from them. So to reconstruct any of those,
they’re called inter frames, frames between key frames, you have to reference back to a key frame. SAM DUTTON: Gotcha. JOHN LUTHER: So if you encrypt all those key frames, you have a very hard time restoring the inter frames. I mean, there might
be parts of it. But yeah, so encrypting the
key frames is an approach. SAM DUTTON: Depending on how
obsessive you get, I guess. JOHN LUTHER: Right, yeah. SAM DUTTON: So looking back
at the diagram of the architecture, so we’ve got it
in this case a Widevine CDM, and we have a Widevine server
for the key process for getting a key for decryption. Yeah, could you just talk us
through a little bit about what the– in this diagram the
Widevine server is, what that’s doing. Because I guess there could be
confusion with Widevine, that we have the Widevine CDM
and then the Widevine key server as well. JOHN LUTHER: Yeah, the server is
managing the keys that are necessary to decrypt
the content. It depends on implementations,
but that might be one key. But also, there are other
approaches where you can change the key every so often
if different sections of the video can be encrypted
with a different key. So really, in the case of a
Widevine situation, what you’re really doing is
acquiring a license. Part of that license is the key
to decrypt the content. There are other policies
that can be included in the license. But primarily, that’s the job of
the server, is it knows the relationship between the
encrypted streams and the keys that are needed to
decrypt them. I guess I should clarify
that it doesn’t do user authentication. That’s sometimes what
people think. Well, it knows who I am. That’s not really true. We, again, keep everything
above the stack. We want all user authentication,
the authorization, those sorts of
things to be handled in the application. This is just to get the key. Again, all of it is– SAM DUTTON: Single task. JOHN LUTHER: –get the key. We need to decrypt this thing. And in Widevine, that
CDM can enforce rights management policies. But really, again, the primary
job is get the key, decrypt the video, show it
to the user. SAM DUTTON: Sure. And the decryption of the video
is actually happening in the CDM plug-in. JOHN LUTHER: In the case
of software, yes. In the case of hardware, and
again, you want to do as much of this stuff in hardware as
you can, because of the efficiency. Battery-powered devices like
tablets and mobile phones, if you were playing video in
software, it tends to use more energy from the battery than
doing it in hardware. As I’m sure our audience knows,
any specific hardware that does a specific function is much more efficient than doing it in software. But yes, in the case of the
Widevine CDM that is deployed in Chromebooks and is coming
soon to desktop Chrome and also Android, it’s doing these
things in software. Our next step is doing as much
of it as we can in hardware. SAM DUTTON: OK. From what I understand from
basic implementation of EME, we have something called
clear key encryption. Can you just talk
us through that? What that means at that level? JOHN LUTHER: Yeah, clear key
means that the key itself is in clear text. In other words, the transmission
of that key, the acquiring of it, is left to
the application, again. So in the most basic form of
an EME implementation, it’s sort of analogous to encrypted
HLS, the adaptive specification that Apple
published through– [INTERPOSING VOICES] JOHN LUTHER: Yeah, yeah. It’s just an encrypted stream. All it needs to do is get that
key and start decrypting it. So in a clear key
implementation, all you’re doing is transmitting that
key in whatever way. Once it’s on the client,
then it can start decrypting the video. In other words, it’s slightly
different from like a Widevine CDM scenario where it’s not only
getting a key, but it’s also getting a license. Rights management stuff
comes into it. Clear key can be done with no CDM whatsoever. The application gets the key, and it’s all 100% done in the browser. SAM DUTTON: Right. Sure. So kind of the simplest
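A Clear Key setup might look roughly like the sketch below, again using the prefixed 2013-era names (Chrome called the key system "webkit-org.w3.clearkey" at the time); the hex key and the way the app obtains it are assumptions for illustration.

```javascript
// Hedged sketch of a Clear Key setup with the prefixed 2013-era names;
// Chrome called the key system "webkit-org.w3.clearkey" at the time.
// The hex key and how the app obtains it are assumptions for illustration.

var CLEAR_KEY_SYSTEM = 'webkit-org.w3.clearkey';

// Pure helper: turn a hex string (however the app stores its key) into
// the byte array that addKey expects.
function hexToBytes(hex) {
  var bytes = new Uint8Array(hex.length / 2);
  for (var i = 0; i < bytes.length; i++) {
    bytes[i] = parseInt(hex.substr(i * 2, 2), 16);
  }
  return bytes;
}

function setUpClearKey(video, keyHex) {
  var lastInitData = null;

  video.addEventListener('webkitneedkey', function (event) {
    lastInitData = event.initData;
    video.webkitGenerateKeyRequest(CLEAR_KEY_SYSTEM, event.initData);
  });

  video.addEventListener('webkitkeymessage', function (event) {
    // No license server round trip: the app supplies the key directly.
    video.webkitAddKey(CLEAR_KEY_SYSTEM, hexToBytes(keyHex),
                       lastInitData, event.sessionId);
  });
}
```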
possible way into a– JOHN LUTHER: Yeah, there are
use cases for it that– uh-oh. You’ve gone to sleep. SAM DUTTON: I’ve gone
to sleep, yeah. JOHN LUTHER: Yeah. Again, we talked to people, and
they said they just want to do some basic encryption like
you can do with encrypted HLS, like you can do with
RTMPE in Flash. And that was the simplest use
case and also conveniently the one that could be done entirely
in the browser with no reliance on any
other technology. SAM DUTTON: So to be able
to implement something and test stuff out. JOHN LUTHER: Yeah, test stuff
or even production. There are production use cases
for it, like the corporate video thing I mentioned
or anything like that. SAM DUTTON: I think what would
be nice would be to actually demonstrate EME in action. Let me just go to there. That’s pretty good
on the desktop. Nice. OK, so I guess, John, you can
talk us through what’s going to happen in this. We can bump up the size
a little bit. JOHN LUTHER: So this is a demo
that our WebM team put together, specifically a guy
named Frank Galligan and also some team members from our
Chrome Media team in Kirkland, Washington. So anyway, what is
happening here? This box, where it says Load MPD
file, that is a manifest for doing this adaptive
stuff that we– SAM DUTTON: I think I have
a link to that here. Hang on. Let’s see if we can see that. JOHN LUTHER: It might
want to open it in a text editor down here. SAM DUTTON: Yeah, thank you. Yes. JOHN LUTHER: It’s XML. This DASH, we should also mention, is– I think it’s now ratified as a standard through MPEG for doing adaptive streaming. So with WebM, we’ve done a DASH-like implementation, because the DASH spec itself doesn’t call out WebM as a format, but there
is a provision there for doing formats other than MPEG. SAM DUTTON: Let’s bump that up
so we can see what’s going on. JOHN LUTHER: So this
is XML, as people, I’m sure, are familiar. So you’re just specifying, OK,
here are the URLs of the different bit rates of video. So you give them each IDs and
give a range of where their indexes are so you can figure
out the index of the file. And then what this is really
doing, the way that this demo is doing, is just byte
ranges of the video. Other adaptive implementations,
like when you created the content, you
had to physically– SAM DUTTON: Yeah, I remember
that from the early– JOHN LUTHER: –10-second
chunks. SAM DUTTON: Hundreds
of chunks. JOHN LUTHER: And we talked to
content providers and YouTube people, and they said, managing
these files is unbelievably complicated. SAM DUTTON: So anyway, you
just have one file. JOHN LUTHER: You just say, OK,
I know the index of it. And anytime I want to switch– SAM DUTTON: Now, that’s cool. JOHN LUTHER: –I’ll get out
the next byte range. SAM DUTTON: So what we’re seeing
here is a bunch of, essentially, URLs and byte
ranges of the different segments of the video. JOHN LUTHER: Yeah, and it’s
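The byte-range idea can be sketched with a simplified, invented stand-in for the MPD: one URL per bitrate plus the byte offsets of each segment, so the player fetches segments with HTTP Range requests instead of managing hundreds of pre-chopped chunk files. The URL, offsets, and manifest shape here are all illustrative.

```javascript
// Sketch of the byte-range idea with a simplified, invented stand-in for
// the MPD: one URL per bitrate plus the byte offsets of each segment,
// so the player fetches segments with HTTP Range requests instead of
// managing hundreds of pre-chopped chunk files.

var representation = {                 // hypothetical manifest entry
  url: 'video_500k.webm',
  segments: [                          // inclusive byte ranges
    { start: 0,      end: 52731 },
    { start: 52732,  end: 104863 },
    { start: 104864, end: 157002 }
  ]
};

// Build the HTTP Range header value for segment n.
function rangeHeaderFor(rep, n) {
  var seg = rep.segments[n];
  if (!seg) return null;               // past the end of the video
  return 'bytes=' + seg.start + '-' + seg.end;
}

// A player would then do something like:
//   xhr.open('GET', representation.url);
//   xhr.setRequestHeader('Range', rangeHeaderFor(representation, 1));
// and append the response bytes to a SourceBuffer.
```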
saying, OK, so anytime you want to switch to another– if we detect or if the
application more accurately detects that the user’s bit
rate has decreased or available bandwidth is
decreasing, it’ll serve them a lower bit rate. SAM DUTTON: So let’s
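The switching decision being described reduces to something like this hedged sketch: pick the highest available bitrate that fits under the current bandwidth estimate. The 0.8 safety factor and all the numbers are illustrative assumptions, not what the demo actually uses.

```javascript
// Hedged sketch of the switching decision: pick the highest available
// bitrate that fits under the current bandwidth estimate. The 0.8 safety
// factor and all the numbers are illustrative assumptions.

// bitrates: available data rates in bits per second, in any order.
// measuredBps: the app's current bandwidth estimate in bits per second.
function pickBitrate(bitrates, measuredBps) {
  var budget = measuredBps * 0.8;     // leave headroom for fluctuation
  var sorted = bitrates.slice().sort(function (a, b) { return a - b; });
  var choice = sorted[0];             // never go below the lowest rate
  for (var i = 0; i < sorted.length; i++) {
    if (sorted[i] <= budget) choice = sorted[i];
  }
  return choice;
}
```

With streams at 256, 700, and 1,488 kbps and a measurement of about 1 Mbps, this picks the 700 kbps stream; as the estimate falls, later calls drop back toward 256 kbps.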
light that up. JOHN LUTHER: You can
zoom it, maybe– SAM DUTTON: Yeah, OK. JOHN LUTHER: Because this is– SAM DUTTON: Hopefully you can
still see that if I make it a bit bigger. JOHN LUTHER: So what you’re
seeing in these red bars below here– and I hope the audience
can see this. You see these green blocks
as it’s playing along. What that’s saying is, OK, the
green means I’m now playing that data rate. So it started low, which
typically in an adaptive case you do, because you always want
to start at the lowest because you don’t really know that much at that point. But as you learn more about what
the user’s bandwidth is as you go through it,
and if you see here, you’ve limited it. So if you bump that up
or down, either way– SAM DUTTON: Let’s bump it up and
see if we get some more. JOHN LUTHER: It might have to
wait until there’s a next key frame to bump it up. Yeah, see now it’s starting
to climb up. Now it’s up to 1,488. So then if you maybe try to– SAM DUTTON: Take that down. JOHN LUTHER: –take down
to 300 and see. SAM DUTTON: Let me
just mute that. It’s slightly noisy. OK, now we’re getting
about 256. JOHN LUTHER: OK,
is it dropped? SAM DUTTON: Yeah, it’s
dropping now. JOHN LUTHER: So this is all
being done in the web app. None of this is in the browser, other than MSE, the Media Source Extensions, which again allows the chunking, which I should mention can be used for
things other than adaptive streaming, like nonlinear video
editing in the browser, all sorts of nifty stuff
you can do with MSE. SAM DUTTON: OK, sort of like
time shifting and stuff? JOHN LUTHER: Or anything that
you take a chunk of video and do something with it, and then
even hold in the buffer and do something with something else. Yeah, there’s all sorts of
things that can be done with MSE in addition to this. So there, now it is. Now we’re back down to 256. And I don’t know if people out
there would be able to see this, but the quality has
decreased, because you’re only down to 256. It’s still pretty
good, because– [INTERPOSING VOICES] JOHN LUTHER: So the practicality
of this is it enables you to use this same player, this same "application" to serve someone
on a 3G network or maybe even a 2G network if you
had very low data rates. But also, the same app is
serving people who might have a 20 megabit connection or a 40
or whatever or Google Fiber at home where you can then serve
them a really, really high quality– SAM DUTTON: And we get this kind
of seamless shift between whatever works, depending
on the context. JOHN LUTHER: It’s constantly
figuring out what’s best for the user. In most cases, you’re not going
to have very drastic, like from 256 bumping
up to 10 megabits. But it just is able to provide
the user with the best. Because in video, the higher the
data rate, typically the better the quality, because
you have more data to work with from the compressed
streams. SAM DUTTON: Yeah, yeah. That’s brilliant. JOHN LUTHER: And this demo is
actually also using the clear key encryption. SAM DUTTON: Right, right. So this is, I guess, a great
place to start if you– JOHN LUTHER: Yeah, this is
everything rolled into one. SAM DUTTON: Get your
head around. JOHN LUTHER: And I think
this one had captions. SAM DUTTON: We might have gotten
rid of them, yeah. JOHN LUTHER: It has full screen,
if you want to do– SAM DUTTON: Yeah. So we’ve got nice full-screen
action going on there. JOHN LUTHER: So this
is all HTML. SAM DUTTON: I don’t know
if we’ve got a track element in this. JOHN LUTHER: This part of the
full-screen API is the permissions of, OK, it
lets you know you’re now showing the full– it’s taking up all the real
estate of your machine. My understanding is that’s one
of the reasons why there was no full-screen provision in
the original HTML spec, because people had security
concerns or something. But anyway, it’s been
solved with– SAM DUTTON: Yeah, because
you had this explicit opting-in at the time. JOHN LUTHER: Yeah, the
full-screen API has brilliantly solved all those. SAM DUTTON: You can check out
the example in the slides actually if people want to
have a look at that. Yeah, it’s a great API. I’m a fan of this because it
has every aspect covered. We have events going
full screen and moving out of full screen. We have CSS for when an element
is full screen. We also have this flexibility,
whereby you can full screen an entire page or a single element,
which is very handy for a page like this where we
might want a full-screen– JOHN LUTHER: The video
tag, yeah. SAM DUTTON: –video
and not just full screen the entire page. That’s great. So that’s a good place to get
started if people want to look at the whole world of EME. JOHN LUTHER: There’s
no SWF file there. It’s all HTML. You right click, View Source,
and you can see everything that’s going on there from
the Media Source Adaptive Streaming, the clear key
encryption, full screen. It’s all there in plain
text for anybody. SAM DUTTON: It’s good stuff. It’s a nice video, too. So I’ve got some links up on the
screen now if people want to have a look at the Encrypted
Media Extensions spec and the Media Source
Extensions spec. That’s on the W3C sites there. Some other links to various bits
and pieces, including the Media Source Extensions demo
that we showed you earlier on. There’s some demos you might
want to look at there, the example we just gave. There’s some other stuff using
dash and MSE as well. And there’s the original demo we
had with a blip, blip, blip from earlier on. So thank you very much, John. JOHN LUTHER: You’re
very welcome. SAM DUTTON: We will be, I think,
doing some more of these GDLs in relation to EME
and keep up to date with the state of implementations and
talk through some more about implementing EME in
apps on the web. JOHN LUTHER: Great, OK. Thanks very much. Thank you everybody out there. SAM DUTTON: Thank you.


11 thoughts on “An introduction to Encrypted Media Extensions (EME)”

  • Unchecked "Allow identifiers for protected content" in Google Chrome under advanced settings -> Content. Who will that put me at odds with? Be nice for low-power devices to have huge fans like the satellite receivers to cool the decryption hardware, won't it? Is Google going to pay users' extra bandwidth costs because of DRM?

  • As long as it's not used to encrypt my uploads to YouTube, I'm fine with it. I just choose not to run a CDM, because I don't buy premium content. I don't pirate it either. Today's movies suck, so I just don't watch them.

    But as far as user uploaded videos go, I will NOT allow YouTube to encrypt videos that I upload. If they ever encrypt uploaded videos, then I'll just move to another site, and post links to them on vimeo or something. I am really tired of DRM, and I am afraid the RIAA could push Google into using this to encrypt videos that users upload. I would rather delete them than have them delivered on a DRM-encumbered platform. My videos will not be allowed to become Defective by Design!!

    Too bad I can't have Google sign an EULA forbidding them from encrypting my content, and use the same laws they use against us, against them. >:(

  • Say a user is authorized to view the video, what is stopping that user from stealing and redistributing the video if the key is known at client side?

  • When support for Netflix movie streaming at the resolution 1080p will be available? This is especially important for Win7 users (all browsers support only 720p Netflix movies in Win7, there is no Netflix app for this system).

  • eme is a monster, u are destroying the internet, how much money did google throw at the w3c people? how many millions of bribing to get drm cancer into their standards to destroy the web?
    how much did it cost the monster that is google, to end the internet as we know it?
