Wednesday, June 19, 2013

ETIA: Enjoying content with an Internet second screen

Al Kovalick, Founder of Media Systems Consulting, kicked off this afternoon's session on "second screen" viewing of Internet content with an overview of the many possible meanings of second screen -- ranging from smartphone to smartwatch -- and the wide variety of possible uses.

Colin Dixon, Founder of nScreenMedia, led off the panel with the amazing stat that 278 million second screen TV apps had been downloaded in the US by the end of 2012 -- over three times the total number of US households with both a TV and broadband. Netflix led the pack with 12% of downloads, with Shazam close behind at an amazing 11%. Overall, apps related to the TV content drove the market; apps for social TV check-in sites were only a tiny share of downloads, and their usage was lower still.

Going beyond applications, Colin explained his vision of second screens moving from being a TV replacement, to enhancing TV, to really changing the TV experience itself by making it truly personal. His examples harked back to yesterday's session on user-contributed content and interactive media experiences.

Mobovivo is augmenting 21 hours per week of TV, per CEO Doerksen
Photo by David Cardinal
Trevor Doerksen, CEO of Mobovivo, explained that second screens can represent an additional ad revenue opportunity for content providers, since they can be a powerful amplifier for the primary experience on the main screen. He and Mobovivo aim to harness this potential and turn what has been a threat for "big screen" content providers into an opportunity.

Echo is taking advantage of second screen opportunities by letting companies create second screen experiences with its toolkit, explained Khris Loux, CEO and co-founder of Echo. A key element of winning strategies, in his mind, is taking control of the user interaction -- use social networks like Twitter, but don't become so dependent on them that you need to share revenue.

Hardie Tankersley of Fox Broadcasting gave some more examples of second screen experiences. In his experience, the hardest part of getting people to use second screen experiences is explaining them. Creating the FoxNow app has given Fox a unified platform for augmented experiences across all of its shows. He identified the additional issue of the second screen taking attention away from the main screen, as well as the opportunity to syndicate second screen content into other second screen apps.

The panel agreed that there is room for some standardization of at least the format of second screen content, and perhaps even for a platform for presenting it, but also that follow-on issues, including monetization, would create a substantial hurdle for those efforts. -- David Cardinal

ETIA: How can we improve multimedia quality over the Internet?

Going forward, there is clearly plenty of room to improve the quality of Internet-delivered media. Joyce Farrell of Stanford's SCIEN led the next panel on some of the strategies for making this happen. To get started, John Apostolopoulos, VP & CTO at Cisco, took us through how media gets from the cloud to the home.

Today's landscape features highly customized and expensive clouds like YouTube's, Netflix's, and AWS. John touted the future benefits of OpenStack as a way to do large-scale, cloud-based delivery more easily and at lower cost; he expects the result to be a greater diversity of user-generated content. Comcast is already an early adopter of OpenStack for its cloud-based video services.
Cisco's Apostolopoulos shows that Comcast is on record as supporting OpenStack.
Photo by David Cardinal
Pushing the envelope of immersive audio experiences is the job of Dr. Sunil Bharitkar, Director of Technology Strategy at Dolby Laboratories. Beginning by arguing that audio should get a larger share of the total bandwidth -- quoting George Lucas that "audio is 50% of the user's experience" -- Sunil went on to explain some of the ways that bandwidth could be used. An important first step is the continued migration of audio technologies from the cinema to the home. Further migration of those technologies to mobile has some of the same challenges Ricardo Motta mentioned with video -- in this case mediocre speakers and noisy, unpredictable listening environments.

Dolby's variable backlight displays have established a reputation for excellence, and Scott Daly, Principal MTS at Dolby Laboratories, plays a large role in ensuring that continues. He took us through how Dolby thinks about video quality, and how it depends on both the image source material and the viewer. Using the example of tuning an HDR image for viewing on a variety of monitors, Scott showed how adaptive tone mapping is required for the image to look its best as the quality of the display and the viewing conditions change.
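
As a concrete illustration (not Dolby's actual algorithm), here is a minimal sketch of display-adaptive tone mapping in TypeScript, using the well-known extended Reinhard operator; the 4000-nit mastering peak and the ambient-raised black floor are illustrative assumptions.

// Sketch: global display-adaptive tone mapping (illustrative, not Dolby's method).
// Maps scene-referred luminance (cd/m^2) into a target display's usable range.
function toneMapNits(sceneNits: number, displayPeakNits: number, displayBlackNits: number): number {
  const masteringPeakNits = 4000;                  // assumed HDR mastering peak
  const L = sceneNits / displayPeakNits;           // luminance relative to display peak
  const Lw = masteringPeakNits / displayPeakNits;  // input level that should map to full white
  const Ld = (L * (1 + L / (Lw * Lw))) / (1 + L);  // extended Reinhard, yields 0..1
  // Re-expand into the display's range; a bright room raises the effective black floor.
  return displayBlackNits + (displayPeakNits - displayBlackNits) * Ld;
}

Re-running the same mapping as displayPeakNits and displayBlackNits change is exactly the kind of adaptation Scott described.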

Like the last panel, these experts largely agreed that better communication between content creators and display devices (whether through tools, metadata, or additional content streams) will be needed to improve display quality under a variety of conditions. Whether that turns into head-end rendering into narrowcast streams, as Scott discussed, or the metadata-enhanced single streams that Pixar and nVidia would prefer -- or a combination of both -- remains to be seen. -- David Cardinal


ETIA: Creating quality content for the Internet, Pixar & nVidia

After a quick introduction from Barbara Lange, who explained a bit about SMPTE for the benefit of the 50% of attendees who are not members, and from Joyce Farrell, who covered her excellent SCIEN industry outreach program at Stanford (SCIEN is also our conference co-host), Pat Griffis kicked off a pair of panels focusing on delivering quality content over the Internet.

Quality content starts with a quality story, told with quality tools. Rod Bogart, color scientist at Pixar, is responsible for preserving and enhancing the original creative concepts behind Pixar's movies through the production pipeline and into the final product.
Rod Bogart of Pixar explains how shading works for animated features.
Photo by David Cardinal

Rod took us through the many steps involved in producing a quality animated feature, and stressed the high degree of quality control at Pixar that ensures the final film -- in its Digital Cinema, iTunes, and HD Blu-ray versions -- is as awesome as possible. Along the way there is plenty that can go awry, starting with mismatched aspect ratios and playback frame rates, as well as the obvious problem of low-resolution devices and low-bitrate data links hurting image quality. Less obvious are issues with the setting -- viewing in bright or noisy rooms, for example.

We all know that batteries are the bane of mobile electronics, but no one knows that better than Ricardo Motta, CTO at nVidia. Ricardo's job is to render content on our smartphones and tablets as well as possible without draining our batteries any more quickly than necessary. Of course, the first problem faced by mobile media renderers is the complete lack of knowledge of the viewing environment. Ricardo explained that all the knowns we rely on in the studio or the cinema become unknowns in the context of mobile.

To help address these shortcomings of mobile, the nVidia imaging pipeline uses adaptive processing to adjust the contrast, dynamic range, and white balance of the image to improve both the likely viewing quality and battery life. For example, some newer mobile devices have color ambient light sensors that can be used to adjust the display white point in real time.
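
As a sketch of the idea (the sensor interface and blend factor here are hypothetical, not nVidia's actual pipeline), white-point adaptation might look something like this in TypeScript:

// Hypothetical color ambient light sensor reading.
interface AmbientReading {
  lux: number;  // illuminance
  cct: number;  // correlated color temperature of the ambient light, in kelvin
}

// Move the display white point part-way toward the ambient light's color temperature.
// Full adaptation tends to look wrong, so a partial blend (assumed 30% here) is typical.
function targetWhitePointK(ambient: AmbientReading, displayNativeK = 6500): number {
  const ambientK = Math.min(Math.max(ambient.cct, 2700), 7500); // clamp to a sane range
  return displayNativeK + 0.3 * (ambientK - displayNativeK);
}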

In response to Pat's questions, Ricardo and Rod agreed that better metadata about the artistic intent of each scene could both improve the quality of the viewing experience and extend battery life. Rod in particular felt that metadata was a much more practical solution than trying to create additional masters for different conditions. -- David Cardinal

Tuesday, June 18, 2013

ETIA: Smarter streaming on the Net with DASH

Streaming is great, except when things fall apart. We've all seen the various combinations of "buffering" messages, spinning balls, and frozen screens that mark a failure of the network to keep up with the source material. Streaming technology that adapts to circumstances in real time is definitely a big part of the solution, but that is easier said than done. In the first panel this afternoon, Dolby's Richard Doherty led a session focused on alternatives for making it a reality.

Will Law of Akamai provides a live demo of DASH
Photo by David Cardinal
Richard emphasized the scope of the problem by telling us that most viewers expect a video to start within 2 seconds. With YouTube serving over 13 billion videos, that's a lot of video on demand. Earlier attempts to tackle the problem included modifications to Adobe Flash, Microsoft Smooth Streaming, and more recently Apple HLS. Dynamic Adaptive Streaming over HTTP (DASH) is the ISO standard, created as part of the MPEG effort, intended to unify these approaches.

Richard also explained that one of the big advantages of DASH is that it operates entirely over HTTP, so it doesn't require any new low-level protocols to be supported. Being an ISO standard, it is also open and can be implemented by anyone. His introduction segued into a demo by Will Law, architect at Akamai, of DASH adapting a video stream to real-time viewing conditions. A look at the HTML behind the demo revealed that it was simple JavaScript tied into the new Media Source API.
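
The pattern behind such demos is straightforward; here is a minimal sketch in the same spirit, assuming a hypothetical segment URL and codec string (a real DASH player would parse the MPD manifest and pick a bitrate per segment based on measured throughput):

// Feed media segments to a <video> element through the Media Source API.
const video = document.querySelector('video') as HTMLVideoElement;
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f"');
  // Append one (hypothetical) segment; an adaptive player appends them continuously,
  // switching among renditions as network conditions change.
  const segment = await fetch('/segments/video-480p-001.m4s').then((r) => r.arrayBuffer());
  buffer.appendBuffer(segment);
});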

Mark Watson, Director of Streaming Standards at Netflix, made it clear that Netflix wants to push the technology envelope to get its content to as many people as possible, and is committed to MPEG and to moving to HTML5 and DASH. Like Netflix, tech giants Microsoft and Adobe have also agreed to adopt DASH as their adaptive streaming technology. Netflix's move was made easier because it has moved all of its rendering and delivery to the Amazon EC2 cloud, so it is highly scalable. However, panelists agreed that Apple, while instrumental in helping get DASH technology off the ground, has not signed on to DASH and instead remains committed to its own HLS technology.

Elemental was founded to build codecs for what was then the new technology of GPUs. Jesse Rosensweig, CTO of Elemental, explained that it was hard to make money doing that, so Elemental moved into video transcoding, a market that has expanded rapidly as entertainment over the Internet has taken off.

Asked about the challenge of delivering quality on the Internet, Will Law pointed out that cable companies have direct connections with huge bandwidth, so consumers expect a lot any time the Internet competes with them. Mark explained that things are even more complicated because it is difficult to get accurate data on which aspects of video quality really matter to users.

-- David Cardinal

ETIA: Future of File Formats -- Introducing IMF

You know you're at a SMPTE event when there is an entire session devoted to file formats. Howard Lukk of Walt Disney made a compelling case for why file formats are a big deal for the studio, and worth big money to sort out. With all the different channels, languages, formats, and customers Disney has, it often needs to create over 35,000 versions of a movie.
Howard Lukk explaining why Disney needs IMF
Photo by David Cardinal

Dr. Pierre Lemieux of Sandflow took us through some more of the details of the new Interoperable Master Format (IMF), albeit at warp speed. The key elements (or "essence") of a film are wrapped with a manifest that allows a single master to produce many, if not all, of the different versions a studio needs. In addition, common frames like test images don't have to be included in each master, but only need to be referenced.

Very roughly, an IMF package starts with a packing list (manifest) of source material and a composition playlist (CPL). Each is encoded in XML, as are output profile lists (OPLs), which specify how the CPL is used to create a set of track files from the source information. Pierre explained that XML was the clear choice for encoding, but that the actual packing and distribution are left up to specific implementations. To assist in extensibility, the format also allows for supplemental asset lists.
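
To make those relationships concrete, here is a rough sketch of the structure as TypeScript types; the field names are illustrative, not the actual IMF XML schema:

// Rough model of an IMF package (field names are illustrative only).
interface PackingList {           // manifest of the assets in the package
  assets: { id: string; hash: string; path: string }[];
}
interface CompositionPlaylist {   // sequences essence into a timeline
  segments: { trackFileId: string; entryPoint: number; duration: number }[];
}
interface OutputProfileList {     // how to turn a CPL into a deliverable version
  cplId: string;                  // the composition this profile applies to
  operations: string[];           // e.g. scaling, cropping, audio mapping
}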

John Hurst, CEO of Cinecert, took the discussion further into the future, explaining how IMF files might eventually be delivered directly to consuming clients, instead of having to be transcoded for specific output formats in advance. For example, Internet vendors like Google, Apple, and Microsoft might eventually get a single IMF version of a movie that they in turn could render and play as needed, rather than needing to get final output versions.

The key to this is yet another file container format, the Material eXchange Format (MXF), which is the structure used for organizing the source data for a movie or other IMF product. By implementing systems that can understand MXF, companies can in principle do their own version creation as needed, directly from the IMF version. John took attendees through the basic process of creating the code snippets needed to parse MXF and begin to build an IMF reader.
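
MXF is built from KLV triplets: a 16-byte SMPTE Universal Label key, a BER-encoded length, and the value itself. A minimal sketch of that first parsing step might look like this in TypeScript (error handling omitted):

// Read one KLV triplet from an MXF buffer, returning the key, the value,
// and the offset of the next triplet.
function readKLV(buf: Uint8Array, offset: number): { key: Uint8Array; value: Uint8Array; next: number } {
  const key = buf.slice(offset, offset + 16);  // 16-byte Universal Label
  let pos = offset + 16;
  let len = buf[pos++];
  if (len & 0x80) {                            // long-form BER length follows
    const numBytes = len & 0x7f;
    len = 0;
    for (let i = 0; i < numBytes; i++) len = len * 256 + buf[pos++];
  }
  return { key, value: buf.slice(pos, pos + len), next: pos + len };
}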

In response to audience questions, the panelists explained that IMF itself wouldn't typically be used for Internet-based content delivery, but that using it upstream could improve the workflow of distributors and ultimately the viewing experience for users.

-- David Cardinal

ETIA: Creating content for the Internet

This morning's Creating Content session featured three different perspectives on the future of Internet-delivered entertainment content. Ann Greenberg, CEO of startup Sceneplay, is focused on the radical notion of users adding substantial value to entertainment products interactively. She is working not just to let audiences help direct the action of an experience, but to let them contribute major original pieces. This "democratization" of cinema has been a vision of Ann's since she was young, and she is clearly excited that the combination of the Internet as a delivery system and metadata embedded in content to make it more intelligent is making that vision a reality.
ETIA Panel on Creating Content for the Internet
Photo by David Cardinal
Carl Rosendahl, animation pioneer, has also left the traditional cinema business and is focused on working with his CMU students to help them create radical new entertainment products. Peter Hirshberg, as befits a veteran of the computer industry, is focused on how the "Internet of Things" can change the future of entertainment. He cited statistics that 25% of all tweets are about television, and that apparently 95% of the discussion of TV is on Twitter. The first result is that social media is amplifying traditional media rather than displacing it. It has also meant that the social media component of major entertainment franchises has become as important as their "on-air" communication.

Often, of course, viewers are participating in both at once: as many as 77% of TV audiences have a second screen in use while they are watching. Hirshberg's vision is to harness these same technologies to create immersive experiences, instead of continuing to view the world through screens. His vision of the city of the future places the Internet and audiences front and center in creating their own entertainment. He provided examples of large-scale interactive "entertainment" experiments, like large displays driven by city-wide energy use and other "audience participation" real-time art projects.

Panelists agreed that the Internet allows content to focus on storylines, freeing entertainment from being limited to a specific location like the theater or a specific technology like video. More dramatically, it clearly opens up new opportunities for audiences not only to interact with their entertainment, but to help shape and even create it. -- David Cardinal

ETIA Kicks Off: Hollywood meets Silicon Valley

What better place to host a first "summit" between the entertainment industry and Internet high-tech than Stanford, home of much of the innovation that has driven both industries. This morning Wendy Aylsworth, SMPTE President, and Bernd Girod, Director of the Stanford Center for Image Systems Engineering (SCIEN), kicked off the first Entertainment Technology in the Internet Age (ETIA) conference. They were joined by conference co-chairs Dr. Joyce Farrell, Executive Director of SCIEN, and Pat Griffis, SMPTE Education Vice President.

After Pat's introduction, Bernd reminded us of Stanford's seminal role as one of the first four Internet nodes and home of Vint Cerf -- as well as of Google, Yahoo! and other key companies. On a less well-known note, we got to see some photos of Muybridge's motion studies, commissioned by Leland Stanford. Wendy stressed how the ETIA conference fits with SMPTE's focus on education, as well as building future collaboration opportunities with Silicon Valley companies, starting with SMPTE's quarterly standards meetings being held locally at Xilinx later this week.

For two days we'll be treated to an all-star lineup of industry veterans and academic heavyweights discussing all aspects of the technical and marketing challenges facing entertainment content delivery over all types of internet-based systems -- including the Internet with a capital I, as well as IP streaming over other channels. For those who can't be here in person, I'll be blogging the conference as it goes, and this evening's marquee event -- Legal and Illegal content distribution, is there a solution? -- will be livestreamed. The smpteconnect Twitter feed is also alive with action from the conference.

-- David Cardinal