Thursday, October 24, 2013

SMPTE 2013: Canon Previews Sophia Loren In 'The Human Voice'


During Thursday’s session on cinematography, Canon Senior Fellow Larry Thorpe previewed some clips from The Human Voice, a 30-minute short lensed with the Canon C500 in 2K 12-bit, using Cooke S4/i Primes and Canon Zooms.
Photographed by Oscar-nominated cinematographer Rodrigo Prieto, ASC, AMC, the film stars Sophia Loren and was directed by the actress’ son, Edoardo Ponti.
It was filmed last summer in Rome and Naples.
The footage was screened during Thorpe's presentation on lens considerations for digital cinematography.

Also Thursday afternoon, NHK’s Hiroshi Shimamoto reported that the company had developed a 120 fps 8K 12-bit CMOS image sensor, and recently built an 8K compact camera that uses this sensor.

Tonight, the conference will wrap with the Honors and Awards ceremony, hosted by David Wood, followed by the second annual SMPTE Jam.

I've looked at screens and clouds from both sides now....


The ‘Demo room’ was a great achievement for SMPTE 2013, and we may look back on it in twenty years’ time in the same way we now look back on the 1992 SMPTE conference demos of digital TV and analogue HDTV. So, what was on show?

One thing was an appraisal of what you actually get for your money on a UHD-1 TV showing native UHD-1 content compared with the same content up-scaled from 1080p. There was a difference, but it was not huge.

Another thing was the impact of HEVC compression on image quality. Estimates of at least a two-to-one bit-rate saving compared to AVC seem well founded.

UHD-1 images with frame rates of 50 and 60 Hz were on show, and delegates could see the kind of quality that the new HDMI 2.0 will ‘let through’ to the display.

There were also demonstrations of live UHD-1 content.

For the morning of the last day of the Conference, I also attended the stream of sessions on ‘clouds’. At previous SMPTE conferences there has been an air of reluctance by the studios to use ‘clouds’ to store their most precious content, because they do not want even the remotest risk of harming it. This year, at least apparently, confidence in, and use of, clouds has grown. The use of ‘hybrid systems’, where you use your own storage while you still have some left and turn to a cloud when you don’t, was a popular theme, and may represent the shape of things to come. Such things can be done automatically. Clouds may be a competitive market, so one of the skills broadcasters may need is to ‘shop around’ for cloud space.
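As a minimal sketch of that hybrid rule (the interface and names here are my own invention, purely to illustrate the idea):

    // Spill an asset to the cloud only when local capacity runs out.
    interface Store {
      put(name: string, bytes: number): void;
      freeBytes(): number;
    }

    function placeAsset(local: Store, cloud: Store, name: string, bytes: number): 'local' | 'cloud' {
      if (local.freeBytes() >= bytes) {
        local.put(name, bytes);
        return 'local';
      }
      cloud.put(name, bytes); // out of local space – time to 'shop around'
      return 'cloud';
    }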

Deploying Video Platforms in the Cloud


Thursday’s program began with a session on deploying video platforms in the cloud, with speakers sharing their experiences and insights.

The messages included:

--Bhavik Vyas of Amazon Web Services detailed ways to build flexible and scalable media workflows in the cloud, using Amazon Web Services as a platform.
This came with a reminder that the increasing size of data files will place stress on the workflow. “It’s getting harder,” he admitted, “especially with the arrival of 4K.”

--“Go cloud first; but not cloud everything,” urged Andrew Sinclair of News Corp. media technology.
He related that News Corp. has a “cloud first” policy. “We are looking for a SaaS platform,” he said, noting that it gives them flexibility since they don’t have to completely commit to it. “Beyond that we look for PaaS (Platform as a Service), down to IaaS (Infrastructure as a Service), and then physical infrastructure.”

--Robert Jenkins, CEO of CloudSigma, discussed the notion of the “hybrid cloud,” suggesting that companies would generally leverage their existing investment and combine it with a public cloud. “It's a project-based industry so pretty much every production tends to be unique,” he said. “We have to accept that this isn’t a cookie cutting approach.”
He warned: “a lot of public cloud pricing is broken. … If you buy it for too long it’s too expensive. With the right price, the public cloud can be competitive.”

Wednesday, October 23, 2013

SMPTE Conference: Studying Acceptance of HFR 3D


Wolfgang Ruppel, professor at the RheinMain University of Applied Sciences in Germany, presented the results of a study that the university conducted about the subjective acceptance of HFRs for stereoscopic 3D.
This was achieved using James Cameron and Lightstorm Entertainment’s “medieval feast” themed test footage comparing 24, 48 and 60 fps; and Circus, an animated short produced at RheinMain University, also rendered in 24, 48, and 60fps.
Ruppel reported that the overall results found a preference for HFRs amongst the participants. More specifically, the findings suggest that the difference between 48 fps/eye and 60 fps/eye varied with the speed of motion, with 60 fps/eye making the most noticeable difference for mid-to-fast motion scenes.
A second test examined down-conversion from 48 fps/eye and 60 fps/eye to 24 fps/eye, compared with footage natively shot with 24 fps/eye.
The conclusion: downconversion appears to work well.
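The papers didn't dwell on the algorithm itself, but the simplest form of such a down-conversion is plain nearest-frame decimation, sketched below purely for illustration (real converters may also blend frames or use motion compensation):

    // For each output frame time, pick the nearest input frame index.
    function downconvert(inFps: number, outFps: number, frameCount: number): number[] {
      const picks: number[] = [];
      const outFrames = Math.floor(frameCount * outFps / inFps);
      for (let i = 0; i < outFrames; i++) {
        picks.push(Math.min(frameCount - 1, Math.round(i * inFps / outFps)));
      }
      return picks;
    }

    console.log(downconvert(60, 24, 10)); // [0, 3, 5, 8] – every 2.5th frame, rounded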
Have you seen these comparisons? What do you think?

SMPTE 2013 - A historic event for UHDTV



Such was the attraction of the special UHDTV exhibition, and the main exhibition area, that attendance in the main sessions was more modest today than yesterday, though there was still interesting technology to hear about.

It included developments in the ATSC on a system for providing emergency warnings to users of the ATSC’s mobile system. If there is a hurricane coming, your mobile TV can switch itself on and give you all the details. It seems, though, that take-up by the industry has so far been modest. That’s the problem with things you only need very occasionally: when you need them, you really need them.

There was also a presentation about technology to allow a broadcaster to increase the number of channels carried in a multiplex, which could silently and secretly be used for carrying content to a consumer or a daughter broadcaster. The idea is that you discreetly reduce the bit rate of the normal channels for a while, and slip in the extra channel or content. No new equipment is needed. Could it be that the technical quality we broadcast could be raised or lowered depending on the audience size – like adaptive internet streaming in reverse?
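As a back-of-envelope illustration (all figures hypothetical), a 10% trim across the normal channels frees enough capacity for a modest extra stream:

    const muxCapacity = 38.8e6;                      // bits/s in the multiplex
    const channels = 8;
    const normalRate = muxCapacity / channels;       // ~4.85 Mbit/s per channel
    const squeezedRate = normalRate * 0.9;           // quietly trim each by 10%
    const spare = muxCapacity - channels * squeezedRate;
    console.log(`${(spare / 1e6).toFixed(2)} Mbit/s freed`); // ~3.88 Mbit/s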

For the UHDTV exhibition itself, there were many UHDTV monitors, comparisons of quality after compression, and more. Maybe SMPTE 2013 will go down in history as the moment the story of UHDTV finally broke?

SMPTE Exhibition Hall Is Now Open


The SMPTE conference's exhibition hall opened on Tuesday and was also the setting for the evening's opening reception.
Numerous stands have technology that underscores the themes of Monday’s symposium on next generation image formats.
Among them was a Cisco display presented during the reception, showing 4K imagery compressed with HEVC to 6 Mbps (roughly 1000:1 against the original file) with 4:2:0 chroma subsampling.
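That 1000:1 figure is plausible; as a quick sanity check (assuming, on my part, 8-bit samples and 60 fps, which the demo didn't spell out):

    const bitsPerPixel = 12;                         // 4:2:0 with 8-bit samples
    const rawRate = 3840 * 2160 * bitsPerPixel * 60; // ~5.97 Gbit/s uncompressed
    console.log((rawRate / 6e6).toFixed(0) + ':1');  // ~995:1 at 6 Mbit/s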
A sampling of the exhibitions:
At the Daystrom stand, Fusion-io is showing playback of 4K uncompressed imagery at 120fps on a Sony 84-inch 4K display, from an HP Z820 workstation equipped with four of its ioFX cards with 1.6 TB capacity.
Fusion-io’s director of visual computing Vincent Brisebois noted, “you can’t get that with SSD and RAID.”
The Fusion-io cards can be purchased standalone or bundled with certain models of HP’s workstations including the Z820.
NVIDIA is exhibiting a beta version of Red’s REDCINE-X Pro, which now has GPU acceleration from the NVIDIA Quadro K6000. At the stand, it is being used for 5K playback in real time.
Also on the floor, Snell is showing its flagship broadcast automation system Morpheus and Momentum, a workflow-automated media asset management and resource-planning tool.
Dolby meanwhile is demonstrating the new model of its Professional Reference Monitor, the lighter and slimmer PRM-4220;  and its glasses-free Dolby 3D system.
The exhibition hall will be open through Thursday.

Networked production – not quite utopia?



A lot of the focus on Day 2 was on the sessions covering the transition to IT-based, networked media.
The trend (of course) is for production centres to move to networks, where everyone in the production centre is interconnected. But is this quite the utopia it sounds? There will, we learned, still be headaches for the engineers who have to plan and install them.

The phrase "of the shelf" products for Networked Media needs qualifications. You cannot achieve the performance we need by combining different systems and switch supports. When considering the data transfer capacity of the studio links, you need to factor in the issues of buffering and synchronisation.  Also compression systems are not ‘reversible’, so expect generation issues.  In other words there may be no such thing as ‘off the shelf IT equipment’ for us.   

For those, like your blogger, who still love 3DTV, there was a small exhibit with an autostereoscopic display at the exhibition. Oh, how the mighty have fallen! The demonstrator told me that six views are created from two original views and displayed on adjacent strips of the display. The result is reasonable, if not perfect. The stereoscopic phenomenon where, when you move your head, the objects in the scene appear to shift laterally is still there outside a modest viewing position. But, hey, we should be thankful for small steps forward?

Tuesday, October 22, 2013

Multi-View Production: New Ways To Capture Depth


Tuesday afternoon's SMPTE conference session on multi-view production included a preview of two developing camera systems that introduce new ways of capturing depth on set. Both were recently tested in the field.
Session chair Howard Lukk, Disney’s vp production technology, described the first as a “hybrid 3D” approach, as it combines elements of both native 3D production and conversion. It's a prototype trifocal camera system that is effectively a rig holding one main camera (an Alexa) and two small satellite cameras. This allows the filmmakers to photograph the imagery on set while at the same time generating depth information that can be used to create a stereo version in postproduction. This camera system was developed by the Fraunhofer Heinrich Hertz Institute with Walt Disney Studios and ARRI.
The second, a “motion scene camera,” was introduced by ARRI’s principal engineer Dr. Johannes Steurer. This is a motion picture camera with a time-of-flight sensor for generating the depth information.
The day ended with the annual opening night reception in the exhibit hall.

Gewecke Keynote: Professionals Can "Redesign" Entertainment


“Over the next 5-10 years we have an opportunity to redesign what entertainment experiences will be … This is a time for us to create new projects and services.”
That was the message of Thomas Gewecke, chief digital officer and executive vp for strategy and business development at Warner Bros. Entertainment, who delivered a keynote Tuesday at the SMPTE conference.
One way that Warners is responding to change in the foreseeable future is by making its content as accessible as possible. Noting that Warners has more than 250 distribution partners in more than 100 countries, Gewecke said, “It needs to be in as many places as possible so consumers have a legitimate way to access it.”
Looking at additional ways to buy, Gewecke described the Cineplex SuperTicket program in Canada that now allows movie-goers to pre-order a digital copy of a film that they are seeing, while they are at the theater. “It’s helping them make the purchase at a time when they are most aware of the movie,” he said.
On potential new opportunities, Gewecke pointed out, “it’s difficult to reproduce the serendipity of walking around looking at things [in a store]. ... We are at the very beginning of reproducing that experience in the digital space. There are many technologies that could drive that.”
During his address, he also expressed interest in the potential of creating “communities of interest,” using as an example the recent Veronica Mars Kickstarter campaign (more than 91,000 people contributed to the project, which raised $5.7 million to fund a movie based on the TV series).
The takeaway: “It was about quantifying the number of fans. They created a community and effectively [learned the level of interest] before making the movie.”
Concluded Gewecke: “We think it’s critical to continue to assume that content is being changed by technology. We need a sense of urgency and it’s important that we act to drive innovation.”

WHAT DOES HOLLYWOOD WANT FROM UHD-1?


MovieLabs, the grouping of Hollywood studios, revealed its own wish list for UHD on the afternoon of the first symposium day at SMPTE 2013. It’s quite a stack – and it seems to go beyond the features available in the UHD TVs in the shops now. The engineers in the audience were delighted when they heard of the high quality aspirations of the movie studios. A little later in the afternoon, another speaker practically got a standing ovation when he said ‘let’s forget 8 bit/sample’ (and go for higher bit depths). Maybe, as a group of engineers, we are happiest when providing the highest image quality that technology permits – but is this a fault?

One of the points of ‘stress’ in the movie studios’ wish list is the suggestion that UHDTV movies shown on TV sets should use the same colorimetry that is used for making movies, termed the ‘XYZ’ system. Having the same system for making movies and for showing them on TVs would have benefits, and the XYZ system is very flexible. But for TV broadcasting the price may be higher bit rates without actually any higher quality for events like sports. Maybe tests made in the coming months will shed light on this?

Monday, October 21, 2013

4K/UHD TV - Will it be a Hit with Consumers?


“Aren’t we living in the most interesting time? I truly believe we are,” said SMPTE president Wendy Aylsworth on Monday as she welcomed attendees to the Symposium on next generation formats that kicked off the annual SMPTE Technical Conference and Exhibition. “Advances in wireless technology allow consumers to get content anywhere, anytime … Home technology is allowing HFR, HDR, UHD – and these can greatly increase the content quality.”
But do consumers want UHD TVs, which are currently priced anywhere from $700-$40,000?
This is the question that Insight Media president Chris Chinnock asked the first panel of the business track, and this question was addressed throughout the day.
“Nobody has been making money for a while, but I see some reason to be optimistic,” said Peter Keith, vp and senior research analyst at Piper Jaffray. “When I see a product, I ask ‘is it bigger and does the picture look better?’ 4K checks both those boxes.”
But there are many considerations in this discussion, and while initiatives in areas such as satellite and OTT 4K services are in the works, a key challenge of course is getting 4K content to the home.
Steve Venuti of HDMI Licensing believes 4K upconversion features on new displays could help to jumpstart the market. Added Samsung marketing exec Dan Schinasi: “The reality is that most of what we watch will be upconverted for the foreseeable future.”
But on another panel, Bryan Burns, the former ESPN exec who is now president and CEO of Forward Direction Group, offered his view on the subject of upconversion, saying, “I hope we don't hear, ‘if there was only more content [4K would gain momentum].’ If there’s a chip that upconverts everything for you, it makes it harder for producers to invest in creating the content.”
Speakers also addressed the need for clear terminology with which to communicate the new capabilities to consumers. The Consumer Electronics Association uses Ultra HD as the industry term for 4K, but the ITU, for instance, has 4K and 8K flavors. “CEA has come up with a definition for the U.S., but they don't follow it in Europe [and other international markets]. There’s still a way to go,” said Paul Gagnon, director of global TV research for NPD DisplaySearch.
In conjunction with this program, a 4K/UHD demo room has been organized to showcase 4K/UHD in both professional and consumer markets. It will remain open through Oct. 23.

4K in Movies and TV—Where Does it Make Sense?


There are “no major barriers” to 4K production using currently available gear, asserted Larry Thorpe, senior fellow at Canon USA, who kicked off a Monday session exploring the potential of 4K for movies and TV.
Steve Weinstein, CTO of Deluxe Entertainment Services Group, said the issue surrounding 4K at his company is that “our storage has been blown out.
“This makes a distributed workflow much harder to do,” he said. “You end up working on proxies. We are looking at cloud transcoding to move this around. … The toolsets seem to be handling 4K reasonably well.”
He added that Deluxe is seeing increased labor in visual effects, requiring more “personnel and time.”
Offering a broadcaster’s perspective, Fox Sports vp of field operations Jerry Steinberg said the network currently has no plans to deliver 4K. “We spent millions going to HD and never got an extra dime from advertisers,” he said. “It seems today [4K broadcasting] is a monumental task with not a lot of return.”
But answering the question “where does it make sense?,” Steinberg related that Fox has been using Sony F65 4K cameras for HD sport coverage, for instance to extract a portion of a shot for a replay. “We were able to tell the story with clarity. This Super Bowl, I would probably have six 4K cameras.”

SMPTE 2013 CONFERENCE DAY 1: ARE YOU NUTS FOR NITS?


The first day of the SMPTE conference symposium split into two streams – one on UHDTV technology and the other on business aspects.

Your blogger joined the technology stream, where two issues predominated, both mostly about the lower UHDTV quality level, UHD-1 (otherwise known as 4K).

The first was about what ‘brightness’ should be assumed for UHDTV displays. Display brightness used to be measured in ‘candelas/sq metre’, but a new, shorter term, ‘nit’ (which by the way means ‘idiot’ in colloquial English), is coming into fashion. There were two schools of thought. One was that UHDTV should be more like the real brightness range we see in nature, with 10 to 20,000 nits. The other was that if we did that, when a TV show was set on the beach, viewers in their living rooms would need to wear sunglasses. This is linked to the ‘opto-electronic transfer characteristic’ we need in the UHDTV standard. The debate will no doubt continue in the ITU later in the year. Let’s hope for compromise.
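For those new to the jargon, the transfer characteristic is just the curve that maps scene light to code values. A minimal sketch using a classic power-law camera gamma (purely illustrative – not the curve actually being debated for UHDTV):

    // Map relative scene light [0..1] to a 10-bit code value.
    function oetf(linear: number, exponent = 0.45, bits = 10): number {
      const clipped = Math.min(1, Math.max(0, linear));
      return Math.round(Math.pow(clipped, exponent) * ((1 << bits) - 1));
    }

    // With reference white at 100 nits, a 20,000-nit highlight sits 200x
    // above the top of this scale – hence the search for a new curve.
    console.log(oetf(0.18), oetf(1.0)); // mid-grey ~473, white 1023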

The second was about the number of images per second needed (the frame rate). Though UHDTV sets going on the market today accept frame rates of up to 60 frames/second – the rate used for HDTV today – evidence was presented showing that there would be a really dramatic improvement in image quality for sports if the frame rate were increased to 100 or 120 frames/second. This quality jump could even outshine the jump in quality due to the increased resolution going from HDTV to UHD-1. However true this is, it looks like we may have to wait some years before receiver decoders can cope with rates higher than 60 Hz.

The discussion showed that we may have some way to go before all the i’s are dotted for UHD-1.

Sunday, October 20, 2013

SMPTE 2013 Symposium - Better Pixels but no 3DTV.

I'm not quite sure what 'better pixels' means (and what therefore 'worse pixels' might be), but it is the talking point at the SMPTE 2013 Symposium here in Los Angeles. It seems to be a shorthand way of saying that more resolution alone may not be enough to sell UHDTV, and that more goodies are needed, such as higher dynamic range (giving the picture more sparkle) and higher frame rates (making the line-backers sharper when they run). Looking at the programme, it seems it will be impossible not to emerge from the Symposium an expert in UHDTV. Some may see UHDTV as the great meeting place of movie production and TV production.....all will be revealed in the next few days. By the way, I've sold my 3DTV glasses, because it doesn't look like they will be needed.

Wednesday, June 19, 2013

ETIA: Enjoying content with an Internet second screen

Al Kovalick, Founder of Media Systems Consulting, kicked off this afternoon's session on "second screen" viewing of Internet content with an overview of the many possible meanings of second screen -- ranging from smartphone to smartwatch -- and the wide variety of possible uses.

Colin Dixon, Founder of nScreenMedia, led off the panel with the amazing stat that 278 million second screen TV apps had been downloaded in the US by the end of 2012 -- over three times as many as the total number of households in the US with TVs and broadband. An amazing 11% of those were the Shazam app, just behind the overall leader, Netflix, at 12%. Overall, apps related to the TV content drove the market. Apps for social TV check-in sites were way down and a very small part of the market -- and their usage was even worse.

Going beyond applications, Colin explained his vision of second screens moving from being a replacement TV and enhancing TV to really changing the TV experience itself, by making it a truly personal experience. His examples harkened back to yesterday's session on user-contributed content and interactive media experiences.

Mobovivo is augmenting 21 hours per week of TV, per CEO Doerksen
Photo by David Cardinal
Trevor Doerksen, CEO of Mobovivo, explained that second screens can represent an additional ad revenue opportunity for content providers, since they can be a powerful amplifier for the primary experience on the main screen. He and Mobovivo aim to harness this potential and turn what has been a threat for "big screen" content providers into an opportunity.

Echo is taking advantage of second screen opportunities by allowing companies to create second screen experiences with its toolkit, explained Khris Loux, CEO and co-founder of Echo. A key element of the winning strategies, in his mind, is taking control of the user interaction -- use social networks like Twitter, but don't become so dependent on them that you need to share revenue.

Hardie Tankersley, Fox Broadcasting, gave some more examples of second screen experiences. In his experience the hardest part of getting people to use second screen experiences is explaining it to them. Creating the FoxNow app has given Fox a unified platform for augmented experiences across all of its shows. He identified the additional issue of the second screen taking away attention from the main screen, as well as the opportunity to syndicate second screen content into other second screen apps.

The panel agreed that there is room for some standardization of at least the format of second screen content, and perhaps even for a platform for presenting it, but also that the follow-on issues including monetization would create a substantial hurdle for those efforts. -- David Cardinal

ETIA: How can we improve multimedia quality over the Internet?

Going forward there is clearly plenty of room for improving the quality of Internet-delivered media. Joyce Farrell, Stanford SCIEN, led the next panel on some of the strategies for making this happen. To get started, John Apostolopoulos, VP & CTO at Cisco, took us through how media gets from the cloud to the home.

Today's cloud landscape features highly customized and expensive clouds like YouTube, Netflix and AWS. John touted the future benefits of OpenStack as a way to do large-scale, cloud-based delivery more easily and at lower cost. He expects the result to be a greater diversity of user-generated content. Comcast is already an early adopter of OpenStack technology for its cloud-based video services.
Cisco's Apostolopoulos shows that Comcast is on record as supporting OpenStack.
Photo by David Cardinal
Pushing the envelope of immersive audio experiences is the job of Dr. Sunil Bharitkar, Director of Technology Strategy at Dolby Laboratories. Beginning by arguing that audio should get a larger share of the total bandwidth -- quoting George Lucas that "audio is 50% of the user's experience" -- Sunil went on to explain some of the ways it could be used. An important "first step" is the continued migration of audio technologies from the cinema to the home. Further migration of those technologies to mobile has some of the same challenges as the ones Ricardo Motta mentioned for video -- in this case mediocre speakers and noisy, unpredictable listening environments.

Dolby's variable backlight displays have established a reputation for excellence. Scott Daly, Principal MTS at Dolby Laboratories, plays a large role in ensuring that continues. He took us through how Dolby thinks about video quality, and how it depends on the image source material and the viewer. Using the example of tuning an HDR image for viewing on a variety of monitors, Scott showed how adaptive tone mapping is required for the image to look its best as the quality of the display and the viewing conditions change.

Like the last panel, these experts largely agreed that better communication between the content creators (whether it is through literal tools, metadata, or additional content streams) and the display devices will be needed for improved display quality under a variety of conditions. Whether that turns into head-end rendering into narrowcast streams as Scott discussed, or metadata-enhanced single-streams that Pixar and nVidia would prefer -- or a combination of both -- still remains to be seen. -- David Cardinal


ETIA: Creating quality content for the Internet, Pixar & nVidia

After a quick introduction from Barbara Lange, who explained a bit about SMPTE for the benefit of the 50% of attendees who are not members, and Joyce Farrell, who covered her excellent SCIEN industry outreach program at Stanford (and also our conference co-hosts), Pat Griffis kicked off a pair of panels focusing on delivering quality content over the Internet.

Quality content starts with a quality story, told with quality tools. Rod Bogart, color scientist at Pixar, is responsible for preserving and enhancing the original creative concepts behind Pixar's movies through the production pipeline and into the final product.
Rod Bogart of Pixar explains how shading works for animated features.
Photo by David Cardinal




Rod took us through the many steps involved in producing a quality animated feature, and stressed the high degree of quality control at Pixar that ensures that the final film, Digital Cinema, iTunes, and HD Blu-ray products are as awesome as possible. Along the way there is plenty that can go awry, starting with mismatched aspect ratios and playback frame rates -- as well as the obvious problems of low-resolution devices and low-bitrate data links hurting image quality. Less obvious are issues with the setting -- viewing in bright or noisy rooms, for example.

We all know that batteries are the bane of mobile electronics, but no one knows it better than Ricardo Motta, CTO at nVidia. Ricardo's job is to render content on our smartphones and tablets as well as possible without draining our batteries any more quickly than necessary. Of course the first problem faced by mobile media renderers is the complete lack of knowledge of the viewing environment. Ricardo explained that all the knowns we rely on in the studio or the cinema become unknowns in the context of mobile.


To help address these shortcomings of mobile, the nVidia imaging pipeline uses adaptive processing to adjust the contrast, dynamic range and white balance of the image, improving both the likely viewing quality and battery life. For example, some newer mobile devices have color ambient light sensors that can be used to adjust the display white point in real time.
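A minimal sketch of what such an adjustment might look like (the blend factor and sensor readout are my inventions, not nVidia's actual pipeline):

    type RGB = [number, number, number];

    // Nudge the display white point toward the ambient chromaticity
    // reported by a colour sensor; blend=0 ignores it, blend=1 matches it.
    function adaptWhitePoint(display: RGB, ambient: RGB, blend = 0.3): RGB {
      return display.map((c, i) => (1 - blend) * c + blend * ambient[i]) as RGB;
    }

    // Warm incandescent surroundings pull the white point warmer.
    console.log(adaptWhitePoint([1.0, 1.0, 1.0], [1.0, 0.85, 0.6]));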

In response to Pat's questions, Ricardo and Rod agreed that better metadata about the artistic intent of each scene could improve the quality of the viewing experience or extend battery life. Rod in particular felt that metadata was a much more practical solution than trying to create additional masters for different conditions. -- David Cardinal

Tuesday, June 18, 2013

ETIA: Smarter streaming on the Net with DASH

Streaming is great, except when things fall apart. We've all seen the various combinations of "buffering" messages, spinning balls, or frozen screens that mark a failure of the network to keep up with the source material. Streaming technology that adapts to circumstances in real time is definitely a big part of the solution, but that is easier said than done. In the first panel this afternoon, Dolby's Richard Doherty led a session focused on alternatives for making it a reality.

Will Law of Akamai provides a live demo of DASH
Photo by David Cardinal
Richard emphasized the scope of the problem by telling us that most viewers expect a video to start within 2 seconds. With YouTube serving over 13 billion videos, that's a lot of video on demand. Historic attempts to tackle the problem were modifications to Adobe Flash, Microsoft Smooth Streaming, and more recently Apple HLS. Dynamic Adaptive Streaming over HTTP (DASH) is the ISO standard developed as part of the MPEG effort.

Richard also explained that one of the big advantages of DASH is that it operates entirely over HTTP, so it doesn't require any new low-level protocols to be supported. Being an ISO standard, it is also open and can be implemented by anyone. His introduction segued into a demo by Will Law, architect at Akamai, of DASH being used to adapt a video stream to real-time viewing conditions. A look at the HTML behind the demo showed that it was simple JavaScript tied into the new Media Source Extensions API.
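I didn't capture Will's actual script, but the gist of an MSE-based adaptive player is easy to sketch (the segment naming and bitrate ladder below are hypothetical):

    const video = document.querySelector('video') as HTMLVideoElement;
    const ms = new MediaSource();
    video.src = URL.createObjectURL(ms);

    ms.addEventListener('sourceopen', async () => {
      const sb = ms.addSourceBuffer('video/mp4; codecs="avc1.640028"');
      const ladder = [800_000, 2_400_000, 6_000_000]; // bits/s
      let level = 0;
      for (let seg = 0; seg < 10; seg++) {
        const t0 = performance.now();
        const data = await (await fetch(`seg${seg}_${ladder[level]}.m4s`)).arrayBuffer();
        const bps = (data.byteLength * 8) / ((performance.now() - t0) / 1000);
        // Climb the ladder when throughput comfortably exceeds the next
        // rung; drop down when it can't sustain the current one.
        if (bps > 1.5 * ladder[Math.min(level + 1, ladder.length - 1)]) {
          level = Math.min(level + 1, ladder.length - 1);
        } else if (bps < ladder[level]) {
          level = Math.max(level - 1, 0);
        }
        await new Promise<void>((done) => {
          sb.addEventListener('updateend', () => done(), { once: true });
          sb.appendBuffer(data);
        });
      }
      ms.endOfStream();
    });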

Mark Watson, Director of Streaming Standards at Netflix, made it clear that Netflix wants to push the technology envelope to get its content to as many people as possible, and is committed to MPEG and to moving to HTML5 and DASH. Like Netflix, tech giants Microsoft and Adobe have also agreed to move to DASH as their adaptive streaming technology. Netflix's move was made easier because it has moved all its rendering and delivery to the Amazon EC2 cloud, so it is highly scalable. However, panelists agreed that Apple, while instrumental in helping get DASH technology off the ground, has not signed on to DASH and is currently still committed to its own HLS technology.

Elemental was founded to build codecs for what was then the new technology of GPUs. Jesse Rosensweig, CTO of Elemental, explained that it was hard to make money doing that, so Elemental moved into video transcoding, a market that has expanded rapidly as entertainment over the Internet has taken off.

Asked about the challenge of delivering quality on the internet, Will Law pointed out that cable companies have huge-bandwidth direct connections, so consumers expect a lot any time the internet competes with cable. Mark explained that things were even more complicated because it is difficult to get accurate data on which aspects of video quality really matter to users.

-- David Cardinal

ETIA: Future of File Formats -- Introducing IMF

You know you're at a SMPTE event when there is an entire session devoted to file formats. Howard Lukk of Walt Disney made a compelling case for why file formats are a big deal for the studio, and worth big money to sort out. With all the different channels, languages, formats, and customers Disney has, it often needs to create over 35,000 versions of a movie.
Howard Lukk explaining why Disney needs IMF
Photo by David Cardinal



Dr. Pierre Lemieux of Sandflow took us through some more of the details of the new Interoperable Master Format (IMF), although at warp speed. The key elements (or "essence") of a film are wrapped into a manifest that allows a single master to produce many, if not all, of the different versions a studio needs. In addition, common frames like test images don't have to be included in each master; they only need to be referenced.

Very roughly, an IMF package starts with a packing list (manifest) of source material and a composition playlist (CPL). Each is encoded in XML, as are Output Profile Lists (OPLs), which specify how the CPL is used to create a set of track files from the source information. Pierre explained that XML was the clear choice for encoding, but that the actual packing and distribution was left up to specific implementations. To assist in extensibility, the format also allows for supplemental asset lists.
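To make that concrete, here is a toy illustration of the shape of a CPL (this is not the real IMF schema – just the idea that a playlist references track files rather than embedding essence):

    const toyCpl = `<?xml version="1.0" encoding="UTF-8"?>
    <CompositionPlaylist>
      <ContentTitle>Example Feature (en-US home video)</ContentTitle>
      <SegmentList>
        <Segment>
          <!-- IDs point into the package's track files; no essence is copied -->
          <TrackFileId>urn:uuid:0001-picture</TrackFileId>
          <TrackFileId>urn:uuid:0002-audio-en</TrackFileId>
        </Segment>
      </SegmentList>
    </CompositionPlaylist>`;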

John Hurst, CEO of CineCert, took the discussion further into the future, explaining how IMF files might eventually be delivered directly to consuming clients, instead of having to be transcoded for specific output formats in advance. For example, Internet vendors like Google, Apple, and Microsoft might eventually get a single IMF version of a movie that they in turn could render and play as needed, rather than needing to get final output versions.

The key to this is yet another file container format, Material eXchange Format (MXF), which is the structure used for organizing the source data for a movie or other IMF product. By implementing systems that can understand MXF, companies can in principle do their own version creation as needed directly from the IMF-version. John took attendees through the basic process of creating the needed code snippets to parse MXF and begin to build an IMF reader.
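I won't try to reproduce John's snippets here, but the heart of any MXF reader is a KLV (key-length-value) walker: each triplet is a 16-byte universal-label key, a BER-encoded length, and the value bytes. A simplified sketch (real files also need partition-aware parsing):

    // Iterate the KLV triplets in an MXF byte buffer.
    function* klv(buf: Uint8Array): Generator<{ key: Uint8Array; value: Uint8Array }> {
      let i = 0;
      while (i + 17 <= buf.length) {
        const key = buf.slice(i, i + 16);
        i += 16;
        let len = buf[i++];
        if (len & 0x80) {                  // long-form BER length
          const n = len & 0x7f;
          len = 0;
          for (let j = 0; j < n; j++) len = len * 256 + buf[i++];
        }
        yield { key, value: buf.slice(i, i + len) };
        i += len;
      }
    }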

In response to audience members, the panelists explained that IMF itself wouldn't typically be used for Internet-based content delivery, but that by using it upstream it could improve the workflow of distributors and ultimately the viewing experience for users.

-- David Cardinal

ETIA: Creating content for the Internet

This morning's Creating Content session featured three different perspectives on options for the future of Internet-delivered entertainment content. Ann Greenberg, CEO of startup Sceneplay, is focused on the radical notion of users adding substantial value to entertainment products interactively. She is working not just to allow audiences to help direct the action of an experience, but to contribute major original pieces. This "democratization" of cinema has been a vision of Ann's since she was young, and clearly she is excited that the combination of the internet as a delivery system and metadata embedded in content to make it more intelligent is making it a reality.
ETIA Panel on Creating Content for the Internet
Photo by David Cardinal
Carl Rosendahl, animation pioneer, has also left the traditional cinema business, and is focused on working with his CMU students to help them create radical new entertainment products. Peter Hirshberg, as befits a veteran of the computer industry, is focused on how the "Internet of Things" can change the future of entertainment. He cited statistics that 25% of all tweets are about television, and that apparently 95% of online discussion of TV happens on Twitter. The first result is that social media is amplifying traditional media rather than displacing it. It has also meant that the social media component of major entertainment franchises has become as important as their "on-air" communication.

Often, of course, viewers are participating in both at once. As many as 77% of TV audiences have a second screen in use while they are watching. Hirshberg's vision is to harness these same technologies to create immersive experiences, instead of continuing to view the world through screens. His vision of the city of the future places the Internet and audiences front and center in creating their own entertainment. He provided examples of large-scale interactive "entertainment" experiments, like large displays driven by city-wide energy use, or other "audience participation" real-time art projects.

Panelists agreed that the Internet allowed content to focus on storylines, freeing entertainment up from being limited to a specific location like the theater or a specific technology like video. More dramatically, it clearly opens up new opportunities for audiences to not only interact with their entertainment, but to help shape and even create it.  -- David Cardinal