Film-Tech Cinema Systems
Film-Tech Forum ARCHIVE


» Operations » Digital Cinema Forum » current time in cinema server

   
Author Topic: current time in cinema server
Wojciech Gorka
Film Handler

Posts: 1
From: Katowice, slaskie, Poland
Registered: Jan 2016


 - posted 02-05-2016 03:24 PM
Is there any possibility to obtain information about the currently displayed film (macro) from a cinema server? I want to know the current hour/minute/second position of the active macro. TMS servers get this information from cinema servers somehow. Could I possibly get it using SNMP? Or is there a standard for communication between a TMS and cinema servers?

 |  IP: Logged

Mark Gulbrandsen
Resident Trollmaster

Posts: 16657
From: Music City
Registered: Jun 99


 - posted 02-05-2016 04:30 PM
This should be in the server's logs. I do not believe it is something the server outputs, though.

Mark

 |  IP: Logged

Carsten Kurz
Film God

Posts: 4340
From: Cologne, NRW, Germany
Registered: Aug 2009


 - posted 02-05-2016 05:42 PM
There is no communication standard. Every server uses its own API; TMSs need to implement most or all of them.

- Carsten

 |  IP: Logged

Harold Hallikainen
Jedi Master Film Handler

Posts: 906
From: Denver, CO, USA
Registered: Aug 2009


 - posted 02-05-2016 08:36 PM
A new standard (SMPTE 430-14) provides the edit unit number currently playing (with the first edit unit of the show being zero) along with a bunch of other information over AES/EBU. This is intended to drive external object-based sound rendering systems, seat shaking systems, or whatever.

Harold

 |  IP: Logged

Carsten Kurz
Film God

Posts: 4340
From: Cologne, NRW, Germany
Registered: Aug 2009


 - posted 02-06-2016 05:05 AM
Hi Harold,

Is that compatible with/identical to the Atmos TC?

Is that audio-encoded, or does it make use of AES status bits?

- Carsten

 |  IP: Logged

Harold Hallikainen
Jedi Master Film Handler

Posts: 906
From: Denver, CO, USA
Registered: Aug 2009


 - posted 02-06-2016 01:18 PM
Atmos sync (ST 430-12-2014) uses frequency shift keying of an audio tone on the sound track; it is recorded on the distributed sound track. Along with other information, it includes the currently playing edit unit number of the reel. Since it is recorded on the sound track, it cannot be the edit unit of the show (the encoding process cannot know what your show will consist of). It would also be difficult to make it the edit unit number of the composition, since the CPL can juggle reels and entry points. A device receiving the code also needs the CPL (ST 429-7-2006) or RPL (ST 430-11-2010) to determine where we are in the show or composition. An RPL can represent an entire show or a composition. Systems have recently been moving towards an RPL representing a composition instead of a show. I noticed this transition as different frame rates were introduced, since it is not fun to calculate what edit unit you're on when the rate varies through the show.

Anyway, Atmos uses SMPTE 430-12 sync. SMPTE 430-14 covers sync and auxiliary content transfer. The sync part is based on an implementation I originally did. Positive numbers are binary encoded in the lower 16 bits of the 24-bit audio words. Putting it in the low 16 bits prevents speaker damage should it accidentally be routed to a speaker (since the peak level is -48 dBFS). Also, every encoded value is followed by its negative (24-bit two's complement). The negative values result in a net zero DC offset, again protecting speakers. It also provides a fair amount of error detection, since each number is sent twice (once negated).

The sync includes a start-of-edit-unit sync word; a flag word indicating whether playback is running, paused, etc.; an edit unit count, with zero being the first edit unit of the SHOW (not the composition or reel); and a bunch of other information about what is being played (audio and image UUIDs, etc.). The sync signal is server generated instead of being recorded in the sound track.

The transfer protocol uses an HTTP request by an external device for the auxiliary data track for the specified show edit unit(s): the URL includes the desired edit unit and a count of how many successive edit units to send.
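A rough sketch of that value/negated-value pairing; the function names are mine, not from ST 430-14:

```python
def encode_value(value):
    """Encode one sync value as a pair of 24-bit audio words: the value
    in the low 16 bits, then its 24-bit two's complement negation.
    The pair sums to zero (no DC offset) and doubles as an error check."""
    assert 0 <= value <= 0xFFFF
    return [value & 0xFFFF, (-value) & 0xFFFFFF]

def decode_pair(word_a, word_b):
    """Recover the value, rejecting pairs that fail the negation check."""
    signed_b = word_b - (1 << 24) if word_b & 0x800000 else word_b
    if word_a + signed_b != 0:
        raise ValueError("sync pair failed error check")
    return word_a

words = encode_value(12345)
assert decode_pair(*words) == 12345
```

Because the payload lives in the bottom bits of the 24-bit word, even the largest encoded value stays far below full scale if the stream is ever routed to a speaker.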

The FSK sync has the advantage that it can survive sample rate conversion and gain changes. It is a bit inefficient (which is why I proposed a binary sync), since it takes something like four 24-bit samples (96 bits) to send one bit. Some information is spread across several edit units since it will not fit in one.
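As a back-of-the-envelope check on that inefficiency (assuming 48 kHz sampling and taking the "four samples per bit" figure above at face value):

```python
sample_rate = 48_000                 # audio samples per second
bits_per_sample = 24
samples_per_fsk_bit = 4              # "something like four 24 bit samples" per payload bit

raw_capacity = sample_rate * bits_per_sample      # bits/s the channel could carry outright
fsk_payload = sample_rate // samples_per_fsk_bit  # payload bits/s the FSK tone actually carries
print(f"{fsk_payload} bit/s payload, {fsk_payload / raw_capacity:.1%} of raw capacity")
```

That works out to 12,000 payload bits per second, around one percent of the channel's raw bit capacity.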

ST430-12 allows the FSK sync to be encoded in the sound track or server generated. No matter where generated, it represents the edit unit number in the reel. On reel transitions, there's a jump in edit unit number.

ST430-14 has to be server generated. There are no jumps in edit number. It increases monotonically through the show.
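The distinction can be sketched with toy numbers; the reel layout and function below are purely illustrative, not from either standard:

```python
# Hypothetical three-reel show: (entry_point, duration) per reel, in edit
# units, where duration is the portion of the reel actually played.
reels = [(0, 240), (48, 192), (0, 480)]

def show_edit_unit(reel_index, reel_eu):
    """Map a reel-relative edit unit (ST 430-12 style, which jumps at
    reel changes) to the monotonic show edit unit (ST 430-14 style)."""
    entry, _ = reels[reel_index]
    played_before = sum(duration for _, duration in reels[:reel_index])
    return played_before + (reel_eu - entry)

# Reel 2 starts at its entry point 48, yet the show count continues at 240:
assert show_edit_unit(1, 48) == 240
```

This is why a device receiving reel-relative sync also needs the playlist: without the entry points and durations, it cannot place itself in the show.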

Harold

 |  IP: Logged

Carsten Kurz
Film God

Posts: 4340
From: Cologne, NRW, Germany
Registered: Aug 2009


 - posted 02-06-2016 02:07 PM
Thanks for sharing this.

Hmm, if I understood all these implications, I probably would not need to ask, but: why not simply implement a TC + aux protocol over IP? Realtime/granularity issues?

I assume the Atmos TC has/needs provisions for sample-accurate sync playback once locked (for reasons of phase accuracy/locked operation), hence the need for a locked audio/AES-based TC?

BTW - a recent Sony firmware update enabled LTC output from their 515. Funny, NOT through one of its AES channels, but through a previously unused pin on the audio output connector (single-ended analog, 1 Vpp). According to the technote, it is SMPTE 12M-1-2008 compliant. The document doesn't mention whether it is CPL- or SPL-referenced, but it should be frame-synced. The document mentions additional network cues to inform the external equipment about feature name, start/stop count, etc., so the TC does not need to transport this.
Looks like a quick hack to enable '4D' content to me. But having LTC available is nice anyway...

- Carsten

 |  IP: Logged

Carsten Kurz
Film God

Posts: 4340
From: Cologne, NRW, Germany
Registered: Aug 2009


 - posted 02-06-2016 03:48 PM
Hmm, what about SMPTE 430-10/11? Are these sync methods active only with features that actually contain subtitles/captions?

- Carsten

 |  IP: Logged

Harold Hallikainen
Jedi Master Film Handler

Posts: 906
From: Denver, CO, USA
Registered: Aug 2009


 - posted 02-06-2016 06:08 PM
There is a time code plus "aux data" over Ethernet (TCP) for closed captions. The caption data is fetched ahead of time and synchronized with the Auxiliary Content Synchronization Protocol which uses TCP. The external device (like a captioning system) connects to the server on TCP port 4170. The server sends messages and the captioning device responds (Request Response Pairs). For captioning, the messages include an announce message, set lease, set RPL location, timeline update messages, and set output mode (playing or not). There are also a few others, but those are the main ones. The external device receives the RPL URL, then fetches the RPL and parses it. The RPL includes the URLs for the timed text files, so the device fetches those (using HTTP). The latency of TCP can vary widely, especially on a network carrying other traffic. It is good enough for captions (within a frame or two), but not for audio.

There IS the Precision Time Protocol, which allows for very accurate timing over Ethernet. It often requires a separate network with PTP-enabled switches. There is a SMPTE group working on a standard to use this for synchronization. My feeling is that it is aimed more at television, as a replacement for color-burst genlock in a studio. It may find cinema application, but I have not heard much of that yet. From my understanding, it gets each piece of equipment to agree on what time it is VERY ACCURATELY. Another protocol might then tell equipment "I'm going to play frame 12345 at XYZ time." It's all in sync because everyone agrees on what time it is.
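That idea reduces to plain arithmetic: once all clocks agree, an agreed start time plus a frame rate determines when each frame plays. A minimal sketch, with names that are illustrative rather than from any PTP profile:

```python
from fractions import Fraction

def frame_play_time(start_time, frame_number, fps=Fraction(24)):
    """Wall-clock time (seconds) at which a frame should play, given an
    agreed start time.  With clocks synchronized (e.g. via PTP), every
    device computes the same instant, so no per-frame handshake is needed."""
    return start_time + Fraction(frame_number) / fps

# "I'm going to play frame 12345 at XYZ time":
t = frame_play_time(Fraction(1000), 12345)
assert float(t) == 1514.375
```

Using exact fractions rather than floats keeps the schedule drift-free over a long show, including at non-integer frame rates.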

If the FSK sync has not gone through any sample conversion, it should be sample accurate since there are specific sample values at specific sample numbers. The standard specifies exactly what the beginning of a frame looks like. I think, though, that in Atmos, audio sample accuracy is not required since we are not mixing audio from different sources. The entire object-based mix is generated in the CP850 using data that was received ahead of time over Ethernet. I think the sync only has to be good enough for lip sync, though it is important that the overall speed of audio playback matches the frame rate of the images so you don't get a buffer underrun or overflow. Of course, at 24fps and 48ksps, the object-based audio will have exactly 2,000 samples per frame (per output channel after rendering). You'd better play those samples during a frame's time.

I looked at 12M when considering various sync signals. The tolerances on 12M, as published, would not allow audio-sample-accurate synchronization. Also, it does not allow for higher frame rates. It outputs HH:MM:SS:FF, while cinema timing is generally just by frame number (or edit unit number) to avoid having to convert back and forth from "wall clock time." Of course, non-integer frame rates (like for television) make 12M lots of fun (drop frames, etc.). In timed text files (captions and subtitles), times are expressed as HH:MM:SS:EE or HH:MM:SS:CCC. The HH:MM:SS:EE notation is for SMPTE. If the edit unit rate is not an integer, then HH:MM:SS do not correspond to "wall clock time"; HH:MM:SS:EE is just a way of expressing an edit unit count. In fact, in the algorithm I wrote, you could say the TimeIn of a caption is 00:00:00:12345 to say the TimeIn is at 12345 edit units. The HH:MM:SS:CCC notation (I don't recall what letter is actually used there, so I'm using C for "click") is what CineCanvas uses. The CCC varies from 0 to 249 and represents 4 ms clicks. Thus HH:MM:SS:CCC DOES relate directly to wall clock time, but with non-integer frame rates the specified time may not correspond directly to the start time of a frame.
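Under the interpretations described above, both notations reduce to simple arithmetic; these helpers are illustrative sketches, not from any spec:

```python
def smpte_to_edit_units(tc, edit_rate):
    """HH:MM:SS:EE -> edit unit count.  EE may exceed the edit rate,
    so 00:00:00:12345 simply means edit unit 12345."""
    h, m, s, ee = (int(x) for x in tc.split(":"))
    return (h * 3600 + m * 60 + s) * edit_rate + ee

def cinecanvas_to_seconds(tc):
    """HH:MM:SS:CCC -> wall-clock seconds, where CCC counts 4 ms ticks (0-249)."""
    h, m, s, ccc = (int(x) for x in tc.split(":"))
    return h * 3600 + m * 60 + s + ccc * 0.004

assert smpte_to_edit_units("00:00:00:12345", 24) == 12345
assert abs(cinecanvas_to_seconds("00:00:01:125") - 1.5) < 1e-9
```

Note that the first helper needs the edit rate as an input, which is exactly why HH:MM:SS:EE stops meaning wall-clock time once the rate is non-integer, while the 4 ms clicks stay wall-clock by construction.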

It's interesting trying to figure out what an external device needs to know and how to get that information to it. Also, with "outboard media blocks," there are some SMS communications that DCI drawings show but that appear to have no standards.

Fun stuff!

Harold

 |  IP: Logged



All times are Central (GMT -6:00)  


© 1999-2020 Film-Tech Cinema Systems, LLC. All rights reserved.