Thursday, October 25, 2012

Tech Committee Focused on Cinema Sound


Sony Pictures Entertainment’s Brian Vessa provided an overview of the work of SMPTE’s B-chain study group and urged participation in the resulting technical committee, TC-25CSS, which he said is the first SMPTE committee solely focused on cinema sound. The presentation was given 25 October during SMPTE’s conference and exhibition.
The current standards for calibration of the sound reproduced in cinemas are based on work begun in the 1970s, employing acoustical real-time analysis of pink noise injected into the cinema sound system at the normalized output stages, termed the B-chain.
A B-chain study group was formed in March 2010 to examine these standards, and to make recommendations, in light of current technology. Its findings suggest that performance in this area varies greatly and that overall quality depends heavily on the skill of the individual technician.
A SMPTE technical committee has now been formed, and proposed activities include the formation of an ad hoc group to create recommended practices for the measurement and calibration of B-chain systems, the standardization of a digital pink noise test signal, and a study of the B-chain requirements of new immersive sound systems such as Dolby Atmos, Barco's Auro 3D and Iosono.
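One of those proposed work items, the digital pink noise test signal, is easy to picture in code. Below is a minimal Python sketch that generates approximate pink (1/f) noise by filtering white noise, using Paul Kellet's well-known three-pole approximation; this is purely illustrative and is not the committee's eventual standardized signal.

```python
import numpy as np

def pink_noise(n_samples, seed=0):
    """Approximate pink (1/f) noise by filtering white noise.

    Uses Paul Kellet's 'economy' three-pole filter; the coefficients
    are his published approximation, not a SMPTE-standardized signal.
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    b0 = b1 = b2 = 0.0
    out = np.empty(n_samples)
    for i, w in enumerate(white):
        b0 = 0.99765 * b0 + w * 0.0990460
        b1 = 0.96300 * b1 + w * 0.2965164
        b2 = 0.57000 * b2 + w * 1.0526913
        out[i] = b0 + b1 + b2 + w * 0.1848
    return out / np.max(np.abs(out))    # normalize to full scale

signal = pink_noise(48000)              # one second at 48 kHz
```

One reason a standardized digital signal matters: two technicians generating their own pink noise from different algorithms will not measure identical spectra, which defeats the goal of repeatable calibration.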
Vessa, who chaired the study group, urged involvement in TC-25CSS. “We have to get everyone in the audio community together if we are going to make a difference,” he said, adding that this includes the participation of mixers, sound designers, manufacturers, and exhibitors.
The group is already starting to liaise with the AES and other organizations.
The first official meeting will be held Dec. 6.

The picture tells the story at SMPTE?


On the last afternoon of the SMPTE Conference, an interesting insight into the practicality of UHD-1 (aka 4K or Ultra HD) came from a French consortium of companies, the ‘4EVER’ project, which is investigating how practical it will be to broadcast the format on terrestrial television in France.

The case for broadcasting UHD-1 rests, they suggested, on two pillars: more information in the picture – the picture now tells more of the story – and greater emotional involvement by viewers.

Their project uses the ‘Quad HD’ format (because it is the only form available), with an interesting new audio idea. They plan for eight audio channels, used as an ‘adaptive’ system rather than one tied to a fixed number of speakers around the room. The system can adjust itself to the speaker configuration the viewer has in his room (with some similarity to the cinema Atmos system).
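No implementation details were given, but the core idea – render the same content to whatever loudspeakers a room actually has – is simple to sketch. Here is a toy Python example (entirely hypothetical, not the 4EVER design) that places a sound source between the two nearest speakers of any layout using constant-power panning:

```python
import math

def pan_gains(source_az, speaker_azs):
    """Constant-power gains placing a source between its two nearest
    speakers (azimuths in degrees); all other speakers get zero."""
    spk = sorted(speaker_azs)
    # walk adjacent speaker pairs, including the wrap-around pair
    for left, right in zip(spk, spk[1:] + spk[:1]):
        span = (right - left) % 360 or 360
        offset = (source_az - left) % 360
        if offset <= span:
            frac = offset / span        # 0 at left speaker, 1 at right
            gains = dict.fromkeys(speaker_azs, 0.0)
            gains[left] = math.cos(frac * math.pi / 2)
            gains[right] = math.sin(frac * math.pi / 2)
            return gains

# The same source adapts to two different rooms:
print(pan_gains(20, [-30, 30]))                # plain stereo pair
print(pan_gains(20, [-110, -30, 0, 30, 110]))  # 5-speaker layout
```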

They find that three suitable cameras are available today – from Red, JVC, and Sony. None is perfect for day-to-day television production, but the JVC image seems likely to allow the greatest compression.

They have looked at the new compression technology, HEVC, which they characterized as offering a bit-rate saving of around 50% at the cost of roughly a 10x increase in encoder complexity. Decoder complexity increases by 2-3x.

The companies in the group argued in MPEG for the inclusion of 10 bits/sample in the first HEVC specification, to be issued next year, and they believe they have won the battle.

Their initial finding is that HEVC compression will put UHD-1 well within the capacity of digital terrestrial television, and that in France it will be possible to add new terrestrial services using DVB-T2, UHD-1, and HEVC in the coming years.
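Taking the 50% figure at face value, a quick back-of-the-envelope check shows why the arithmetic works out. The AVC reference rate and multiplex capacity below are illustrative assumptions, not numbers from the 4EVER presentation:

```python
# Back-of-the-envelope check using the ~50% saving quoted for HEVC.
# The AVC reference rate and DVB-T2 multiplex capacity are assumed.

avc_uhd_rate_mbps = 40   # assumed H.264/AVC rate for decent UHD-1
hevc_saving = 0.50       # bit-rate gain quoted in the presentation
dvbt2_mux_mbps = 40      # typical DVB-T2 multiplex capacity

hevc_rate = avc_uhd_rate_mbps * (1 - hevc_saving)   # ~20 Mbit/s
print(f"UHD-1 in HEVC: ~{hevc_rate:.0f} Mbit/s, i.e. about "
      f"{dvbt2_mux_mbps / hevc_rate:.0f} such service(s) per T2 mux")
```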

Do you think they will be the first?
David Wood   

Pirates of the Olympics?



The Olympics had a dedicated session on the last day at the SMPTE conference.  This blogger reported on the 3D coverage in an earlier blog, but other interesting issues came up in the session.  

The practical details of the NBCU ‘multiscreen’ services were given. These are the services provided to mobiles, PCs, and tablets. The production and distribution facilities were massive, and the results for users were creative and elegant. For example, tablet users could watch the live feed on the left and more information (or ads) on the right. How much the environment has changed since the last Games because of the massive use of tablets today! The NBCU service was a real ‘tour de force’. But just for 17 days?

Even more novel was the presentation on the measures NBCU took to protect its content from piracy during the Games. This information is hard to find elsewhere, so download the presentation if you want more.

NBCU paid 1.18 billion dollars for the US rights to the Games alone, so you can imagine they would pay more than scant attention to protecting the content. In brief, they derive a digital ‘fingerprint’ for each short segment of the content: a compact digital word computed from the segment, different from that of any other segment.

This is used to check whether anyone is illegally streaming their content: they derive the fingerprint for any suspect content and compare the two. If they match, they know they have a pirate, and they ask the portal to take the segment down. A team of ‘internet investigators’ monitors the web 24/7.
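NBCU did not disclose the actual algorithm, so here is only a toy Python sketch of the general scheme described: fingerprint each short reference segment, then test suspect material for matches. (Real systems use perceptual fingerprints that survive re-encoding and cropping; the cryptographic hash below is a stand-in for brevity.)

```python
import hashlib

def fingerprint(segment: bytes) -> str:
    """Toy fingerprint: a hash of the segment's bytes. A production
    system would use perceptual features robust to re-encoding."""
    return hashlib.sha256(segment).hexdigest()

def find_pirated(reference_segments, suspect_segments):
    """Return indices of suspect segments matching known content."""
    known = {fingerprint(seg) for seg in reference_segments}
    return [i for i, seg in enumerate(suspect_segments)
            if fingerprint(seg) in known]

# Any hit triggers a takedown request to the hosting portal.
hits = find_pirated([b"...official feed segment..."],
                    [b"...scraped stream segment..."])
```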

I suppose this will be the ‘shape of things to come’ for much content in the years ahead.  Do you agree?

David Wood

Wednesday, October 24, 2012

A little bit more on clouds and stuff...


As the last blog explains, if you’ve looked at clouds from both sides now, you would have enjoyed the whole half day of presentations about them. There are evangelists for clouds, and there are sceptics. One of the main (and best) evangelists is Al Kovalick, who managed the session.

If you are doing nothing about clouds yet, it was easy to feel inadequate after, for example, hearing about the new Amazon service ‘Glacier,’ which promises a level of content security beyond that achievable in your own building. Content security has been one of the main brakes on the wider use of clouds for professional content.

 Al held that everyone should be asking their suppliers when they will be offering a product that can run in the cloud, and how much the product has been tested in the cloud.  

From where your blogger sat, there were nevertheless muttering dissenters in the audience who remain cautious about whether the time of clouds has finally come.

Another ‘world changer’ idea came from Gary Demos today. His presentation was on the ‘unfolding merger of television and movie technology’. I think this amounted to the hypothesis that the television industry should change to the ACES system used by the movie industry, which would allow higher quality to be preserved through the production chain. In principle it’s true, but my guess is that it’s a little unlikely – for example, many broadcasters believe that, in a progressive-scanning future, television programme production will move down from 4:2:2 to 4:2:0, rather than ‘up’ from 4:2:2 to the 4:4:4 of movie production.
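For a sense of what ‘down’ and ‘up’ mean in data terms, here is the standard chroma-subsampling arithmetic (my illustration, not from the presentation), counting samples per 2x2 block of pixels at a common bit depth:

```python
def samples_per_2x2(scheme: str) -> int:
    """Samples (Y + Cb + Cr) carried per 2x2 pixel block."""
    luma = 4                         # one Y sample per pixel
    chroma = {"4:4:4": 8,            # Cb+Cr at full resolution
              "4:2:2": 4,            # Cb+Cr halved horizontally
              "4:2:0": 2}[scheme]    # Cb+Cr quartered
    return luma + chroma

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    rel = samples_per_2x2(scheme) / samples_per_2x2("4:2:2")
    print(f"{scheme}: {samples_per_2x2(scheme)} samples/block, "
          f"{rel:.2f}x the 4:2:2 data volume")
```

So 4:4:4 carries 1.5x the data of 4:2:2, while 4:2:0 carries 0.75x – which is the economic pull the broadcasters feel.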

What do you think?

A Cloudspotter’s Guide To Migration


“This is big business, and if we think we are the tail wagging the dog, we are wrong,” Al Kovalick (Media Systems Consulting) said of the cloud.
He cited IDC research projecting that the web apps market will reach $40.5 billion by 2014, and that in 2012, 80% of all new applications would be designed to run in the cloud.
Asserted Kovalick: “We need to leverage it; we need to migrate to the cloud.”
Production collaboration represents one opportunity, as the cloud could provide applications including ingest, download, MAM, review and approval, storage/archive, editing, transcoding and rendering.
What is still missing is real-time control and the capability to support live events, “especially when it is 4K,” Kovalick said, adding that the low-hanging fruit includes HR services, billing, traffic/scheduling and MAM.
“Just about anybody who makes a product could put it in the cloud [now],” Kovalick believes, though he pointed out that current business models encourage ownership rather than a pay-as-you-go model. “That is why you haven’t been seeing it. It’s too cheap, it’s too affordable. The manufacturers can’t survive (with this model). I think this will change, somewhat slowly.”

In close and personal – the 3D Olympics


3D is certainly alive!  There were many interesting presentations on 3DTV, but none more so than Jim DeFilippis’ presentation on 3D at the 2012 London Olympic Games. Just nine months before the Games were to begin, a decision was taken to go for extensive coverage in 3D – 18 different sports were slated, and a TV channel was dedicated to 3DTV. The companies OBS, NBCU, BBC, Sky Italia, Foxtel, and Panasonic got together and made the impossible happen.

Jim’s visual 3D material from the Games is eye-popping, and a must-see for broadcasters yet to be convinced about 3D. 3DTV is notorious for needing ultra care in matching the left- and right-eye images perfectly, and for care across scene cuts. Imagine having to provide the 3D channel, live, with so many different cameras from different companies. They did it.

The Games brought lots of experience about which types of sports event are ‘3D-friendly’ and which are less so. 3D works best when the main object in the scene is close to the camera – the effect is most striking and natural when the principal object is within a few metres of us, and the same applies to 3DTV. When you see the material, it is clear that some sports are an absolute gift for 3D, such as diving and gymnastics. Others, with teams and a larger canvas, are less effective in 3D.

Some interesting rules about 3D programme making also emerged from the content – maximum disparity tolerances of -1% and +4%, with captions at +0.5% looking best.
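Those percentages refer to disparity as a fraction of picture width; a quick conversion for a 1920-pixel-wide image shows the scale involved (my arithmetic, not from the presentation):

```python
# Convert disparity percentages of picture width into pixels
# for a 1920-pixel-wide image.

width_px = 1920
for label, pct in [("max negative (in front of screen)", -1.0),
                   ("max positive (behind screen)", +4.0),
                   ("captions", +0.5)]:
    print(f"{label}: {pct:+.1f}% -> {width_px * pct / 100:+.1f} px")
```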

Overall, the advice in the title of this blog – get the camera ‘in close and personal’ – is probably the number one macro rule for making effective 3DTV.

FCC Update on IP-Delivery of Closed Captions


Earlier this year the FCC adopted rules for IP delivery of closed captions in video programming, and these regulations are starting to go into effect. On Wednesday at the SMPTE conference, the FCC’s Alison Greenwald Neplokh provided an overview of what content owners, distributors and manufacturers need to know.
The basics: content owners must provide captions to distributors; distributors must pass the captions through; program providers must ensure that the end user can view the captions; and physical devices (including bundled software) must make that viewing possible.
The FCC adopted SMPTE Timed Text (SMPTE-TT) as a safe harbor.
Closed captions need to be a priority for manufacturers if they are updating devices with new features. For available devices, waivers or exemptions are options if certain conditions exist. For instance, device makers can get an exception if meeting the requirement proves technically infeasible (the device can’t be modified with "reasonable effort or expense") or if support for screens smaller than 13 inches is not achievable. Manufacturers could also get a waiver if the device is designed primarily for purposes other than receiving and playing back programming.
Some outstanding petitions could result in revisions to the FCC rules. For instance, the Consumer Electronics Association is asking that the FCC rules not cover removable media players, and Telecommunications for the Deaf and Hard of Hearing wants the regulations to include video clips (including news).

Tuesday, October 23, 2012

SMPTE Exhibition "Excellent As Always"


An encouraged AmberFin CTO Bruce Devlin took some time out from the SMPTE exhibition—which opened 23 October and runs through Thursday—to talk about some of the progress he is seeing in the adoption of file-based workflows and the SMPTE standards that support them.
“SMPTE is so important in getting vendors’ [technologies] to talk to each other in a cooperative environment for the benefit of media companies,” Devlin said.
He was encouraged to hear SMPTE 436—which supports carriage of VBI and VANC in MXF—discussed on Tuesday in a conference session. And he noted that Wednesday’s presentation of SMPTE Timed Text Format (SMPTE-TT), a caption interchange standard, is another important topic on the agenda.
AmberFin is on the exhibition floor showing its iCR, which has implemented SMPTE 436 and SMPTE-TT.
Also exhibiting this week is Digital Rapids, which is on hand with Kayak, its new workflow platform, and StreamZ Live Broadcast, a live encoder developed to support both multi-screen streaming and broadcast operations. “Excellent as always,” said Digital Rapids product manager Darren Gallipeau of SMPTE’s exhibition. Like Devlin, Gallipeau said he looks forward to exploring file-based workflows this week.
The list of 77 exhibitors also includes Cinegy, which is on hand with the latest version of its Cinegy Multiviewer as well as the new Cinegy Player, a standalone tool for viewing broadcast files. At SMPTE, the company is distributing complimentary Cinegy Player licenses.
The exhibition will be open Wednesday from 10:30-18:30, and Thursday from 10:30-14:00.

“If you can’t find it, it’s not worth a damn thing”



Thus spoke Darryl Jefferson, responsible for the incredibly successful US Olympics coverage at NBCUniversal, at the SMPTE industry lunch on 23 October 2012. His reference was to the need for meticulous metadata labelling of all the material they held – something absolutely symptomatic of the new age of file-based programme production.

We didn’t really need the statistics from Darryl – everyone knows that the Olympic Games in summer 2012 were the most successful television event in the history of mankind – but we got them anyway.

What can you say about 82 million television viewers (just in the US)? Maybe even more phenomenal than the television audience, which we might expect to be high, was the huge US Internet audience: 57 million unique visitors, with the Olympic streams at times consuming 35-50% of the entire US internet bandwidth. Some 3,000 highlight packages were produced.

There were 2,500 staff working in London, with 200 cameras providing pictures in addition to those from the IOC’s OBS.

Part of the production success was the use of a media asset management system, which allowed the NBCU teams in New York, Florida (for the Spanish-language services), and elsewhere to find and call up items instantly from the available material.

Darryl claimed that the event showed ‘television is not dead’. Maybe so, but it appears to have an ever-growing room-mate.

Any video content from the dawn of time at the touch of a button?



Anthony Wood, CEO of ROKU, gave the opening keynote on the first day of the SMPTE conference (23 October 2012).   Anthony is a successful and charming Silicon Valley entrepreneur, complete with the handyman uniform of sneakers, worn jeans, and suspenders (see photo).  The term ROKU comes from ‘six’ in Japanese, by the way.  It was Anthony’s sixth company.

His basic point was that the future of television will be via Internet streaming, rather than other delivery means. This is the service that the ROKU player or set-top box provides, at an equipment retail cost of just 50 USD. It provides a ‘gateway’ to the services of content providers such as Netflix, and competes with other gateway boxes such as Apple TV.

The ROKU service will also be available via a ‘dongle’ for insertion into the USB port of a TV set. This will be a disposable dongle, which can be updated with new and more sophisticated software as time goes by, without the viewer having to wait for his next TV set.

ROKU offers access to a large number of services, which fall into popularity bands like a ‘long tail’.   Top gun at the moment is Netflix. 

Anthony was asked what broadcast stations should be doing to continue to be successful in a future age of Internet gateways like ROKU. He suggested they should be working out how to gain the rights to include premium content in their Internet services, offered via an Internet gateway as well as by broadcast. It seems that the rights to much premium content today reside with the networks, rather than with the broadcasters who use network content for broadcasting.

So what is the ‘end point’ of the current trend?  For Anthony it will be a world where “any video content in history is available via Internet”.  

Do you think he is right?

David Wood. 

Monday, October 22, 2012

SMPTE Planning to Create HFR Test Material


CineCert’s John Hurst, co-chair of SMPTE 21DC, kicked off an update on the technology committee’s work by reporting that the bulk of the DCI standards have now been published, but that the current platform is “just [the] beginning of what is possible” in cinema.
MKPE Consulting’s founder Michael Karagosian—who co-chairs 21DC’s HFR study group with Kommer Kleijn and David Stump—said the group is focusing on HFR mastering and distribution through exhibition.
He reported that 428-7 is a subtitling spec that the group aims to finalize for 3D applications—though it is not yet clear how HFR subtitles will be accomplished. Karagosian added that The Hobbit (which opens Dec. 14) is not expected to offer subtitles on its HFR versions.
The study group is also planning a shoot to create HFR test material, which would be made available to all vendors to test developing HFR technology for production, postproduction and exhibition.
Director of photography David Stump is creating a shot list for this production.

HFRs Challenge Postproduction


Paul Chapman, senior VP of technology at Burbank-based postproduction facility FotoKem, led a discussion of some of the challenges that high frame rates bring to post, during SMPTE’s symposium on high frame rates for digital cinema.
For starters, he asserted that support for HFR from creative editorial tools is lacking and needed. This sentiment was echoed by other speakers, including Disney’s Howard Lukk, who did however single out Adobe’s Premiere Pro as offering HFR support.
Chapman reported that he had conducted an informal survey of dailies vendors, and his findings suggest that all are now testing high frame rates. He expects to see quite a lot of development in this area over the next 12 months, and reported that FotoKem’s NEXTlab near-set dailies system is already working on an HFR project.
Nico Recagno of SGO (the maker of the Mistika postproduction system, which is in use at Park Road Post on The Hobbit) said that managing dailies at HFRs is “horrifically difficult,” citing challenges including time code, methods of viewing and QC, and sound sync.
Recagno added that the biggest concern that he is hearing from colorists is getting proper calibration during color grading.
He and Chapman both urged the community to create HFR standards in some of the discussed areas, including time code.

The Trumbull campaign for quality.



The keynote presentation for the ‘higher frame rate’ symposium was given by the respected Douglas Trumbull – filmmaker, visionary, and more. Douglas explained that his father was an engineer and his mother an artist. (Your blogger wonders if that makes him an ‘artineer’?) He argued that cinema has to become a more immersive experience if audiences are to be held and regained. Brighter screens and more are needed, so that the audience becomes ‘part of the movie’.

Douglas revealed his personal history – triumphs such as the cinematography for the movie 2001, and disappointments such as the fallout from the death of actress Natalie Wood during the making of his movie ‘Brainstorm’.

One of his achievements was the idea for the ‘Showscan’ system, which offered 70mm film shot at 60 pictures/second. It was this, he explained, that led to his lifelong commitment to raising the quality bar for movies.

He had a basic concept to offer: that different types of scene, and different components of a scene, need different picture rates. A future system should allow individual elements or segments of pictures to run at different picture rates. In other words, the picture rate becomes a component of the production grammar, like colouring or contrast. This would create the most efficient and highest quality delivery system. To make it work, the movie could be shot entirely at, say, 120 pictures per second, and individual elements could then be ‘sub-sampled’ to their optimum rate. It’s an intriguing concept if it can be made to work.
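As a thought experiment, the mechanics are easy to sketch: shoot at a high master rate and decimate each element to its own rate. A toy Python illustration (the rates and element names are hypothetical, not from the presentation):

```python
# Toy illustration of shooting at a high master rate and carrying
# each picture element at its own sub-sampled rate.

MASTER_FPS = 120

def subsample(frames, target_fps, master_fps=MASTER_FPS):
    """Keep every Nth frame; the target rate must divide the master rate."""
    assert master_fps % target_fps == 0, "target must divide master rate"
    return frames[::master_fps // target_fps]

frames = list(range(MASTER_FPS))       # one second of frame indices
fast_action = subsample(frames, 120)   # full rate: all 120 frames kept
dialogue    = subsample(frames, 24)    # film look: every 5th frame
```

Note that 120 divides evenly into 60, 40, 30, and 24, which is one reason a 120-pictures/second master is an attractive common denominator.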

Douglas has also been working on using higher picture rates for 3D movies – apparently a 3D movie shot at 120 pictures per second is a great experience.

His keynote was a very thoughtful one, and his commitment to high quality is a breath of fresh air. Outside the meeting room, your blogger asked him if he might work with the 33-megapixel Super Hi-Vision system (ITU UHD-2), at the 120 pictures/second it may offer in future. Could we ever see a Super Hi-Vision Trumbull remake of his 2001 or Brainstorm movies? Wouldn't that be something.

David Wood