Film-Tech Cinema Systems
Film-Tech Forum ARCHIVE


  
This topic comprises 3 pages: 1  2  3 
 
Author Topic: Live-playing all the time...viable?
Mike Blakesley
Film God

Posts: 12767
From: Forsyth, Montana
Registered: Jun 99


 - posted 06-02-2017 05:53 PM
Right now we're in a situation where I will have to "live play" our movie from its shipping drive, and I had the thought: Why not just "live play" all the time? It would save wear and tear on our server drives, and would be a faster process than ingesting the content.

Your thoughts?


Frank Cox
Film God

Posts: 2234
From: Melville Saskatchewan Canada
Registered: Apr 2011


 - posted 06-02-2017 06:04 PM
Unless your server works differently than mine, I don't think you could set up a playlist with your trailers and everything and then just let 'er rip. (On further thought, maybe you can; I've never looked at the "feature" list on the contents menu when I've had a distribution drive plugged into the server, so I don't know if the movie actually shows up there or not.)

Plus I suspect that those distribution drives aren't intended for long-term use, not to mention the beating they take during frequent shipping, so they might be a lot more failure-prone than a drive that's intended for continuous use and doesn't get banged around all the time.

But what do I know...


Monte L Fullmer
Film God

Posts: 8367
From: Nampa, Idaho, USA
Registered: Nov 2004


 - posted 06-02-2017 07:23 PM
I had three houses run for over a month with four shows each on Live Play because my storage drives were acting up...and these were using the USB sled option, not playing directly from the CRU bay, since I don't have the 2000AR units but the SX3000 IMB units.

Plus, I have to leave these units on 24/7 due to VPF and DCI requirements. Can't shut them down every night.

I just built the content from the added drives in the SPL and they ran fine.

One could easily do live play; it's not hurting the system in any way.


Frank Angel
Film God

Posts: 5305
From: Brooklyn NY USA
Registered: Dec 1999


 - posted 06-03-2017 12:21 AM
I would think playing "live" from a shipping drive would add a somewhat higher risk factor...in film parlance, it would be akin to running film from beat-up shipping reels...well, sort of. I would trust my own server and hard drives, which I run maintenance and assessment checks on regularly so I know if there is trouble brewing, while a shipping drive's condition is pretty much a crap shoot until it ingests properly.

These drives that DCPs arrive on are reused, yes? I mean, they don't use out-of-the-box new hard drives for each release, do they? That would be nice -- when we do shows with wireless mics we ALWAYS put in a fresh battery EVERY show. Sure, chances are there is plenty of juice left to do 10 more shows, but are you going to be the tech who's responsible for Mel Torme's mic dying for want of a new 99 cent battery?

Then again, given how much they claim they save on producing film prints and 35mm shipping costs, I would think that using a new hard drive would be a drop in the bucket compared to what it used to cost them for film distribution.

If I know I am getting the DCP on a spanking new hard drive, I would feel a little less apprehensive about playing "live." Just like when you get a print that you can see is on brand new shipping reels, you might feel comfortable playing it off those shipping reels and be reasonably certain it's not going to bite you in the arse in the middle of a show. I guess it's just a risk assessment you make.

Might I ask why you would consider doing it this way (assuming you can)? I would assume DCI, with all its Big Brother insistence on making you run your booth the way they want, would have a fairly strict protocol for how a film is to be ingested and played -- although they don't seem to have much to say about which way the sheet hits the fan when it comes to aspect ratio uniformity/presentation.


Brad Miller
Administrator

Posts: 17775
From: Plano, TX (36.2 miles NW of Rockwall)
Registered: May 99


 - posted 06-03-2017 12:32 AM
quote: Frank Angel
Then again, given how much they claim they save on producing film prints and 35mm shipping costs, I would think that using a new hard drive would be a drop in the bucket compared to what it used to cost them for film distribution.
I think they are too busy spending FAR more money on each release by making a dozen different DCP versions, paying for KDM creation on each one for each screen plus the 80 different trailers each movie apparently now has to have for marketing. [Roll Eyes]


Frank Angel
Film God

Posts: 5305
From: Brooklyn NY USA
Registered: Dec 1999


 - posted 06-03-2017 01:49 AM
Hey, they're the ones that made that bed that they have to sleep in!


John Thomas
Film Handler

Posts: 75
From: Boston, MA
Registered: Sep 2011


 - posted 06-03-2017 04:04 AM
Even if you're looking at the SMART parameters for each distribution drive (distributors should be doing this prior to shipment), there is no telling what each drive has been through during shipment and otherwise.

Here is a link to a whitepaper from some folks at Google on the topic of disk failure:

https://www.usenix.org/legacy/event/fast07/tech/full_papers/pinheiro/pinheiro.pdf

While there are a few interesting takeaways from the study, I think the most crucial is that in the context of a single drive even SMART parameters will not reliably predict imminent failure. Therein lies the value of the redundancy of a healthy RAID.

Unless your RAID is in bad shape, you are taking an unnecessary risk with live play -- and if it is in bad shape, the responsible solution is to make your RAID healthy again.
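
If you do want to eyeball a shipping drive before a show, here's a minimal sketch along those lines (assuming a Linux machine with smartmontools installed; /dev/sdb is a hypothetical device node) that checks the two attributes that study found most correlated with failure:

```python
import subprocess

# Attributes the Google study found most correlated with failure:
# 5 = Reallocated Sector Count, 197 = Current Pending Sector Count
WATCH = {5: "Reallocated_Sector_Ct", 197: "Current_Pending_Sector"}

def check_drive(device):
    """Warn on any watched SMART attribute with a nonzero raw value."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # SMART attribute table rows start with the numeric attribute ID
        if fields and fields[0].isdigit() and int(fields[0]) in WATCH:
            raw = int(fields[-1])  # raw value is a plain integer for these two
            if raw > 0:
                print(f"{device}: {WATCH[int(fields[0])]} = {raw} -- be wary")

check_drive("/dev/sdb")  # hypothetical device node for the shipping drive
```

Per the paper, a clean report still doesn't mean the drive won't die mid-show; it only tells you when a drive is already suspect.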


Dennis Benjamin
Phenomenal Film Handler

Posts: 1445
From: Denton, MD
Registered: Feb 2002


 - posted 06-03-2017 04:51 AM
I guess everyone is having RAID issues now that we've hit that 5 year mark?


Steve Guttag
We forgot the crackers Gromit!!!

Posts: 12814
From: Annapolis, MD
Registered: Dec 1999


 - posted 06-03-2017 06:46 AM
I've seen an uptick in RAID issues. One of the things we do for our customers is check drive status periodically, and I've noticed quite an increase in reallocated sectors and errors listed in the SMART logs (regardless of server brand). So yes, 5 years of 24/7 seems to be about the useful life. As far as outright stop-the-show failures go, those are still very rare.

Note that those LMS systems are going to start dropping drives too.

We are using this opportunity to offer larger drives: systems that had 2-3TB of storage when they were new five years ago can now have 4-6TB (or more; one sub-run customer is at 12TB of storage per server, since they have to "harvest" content while it is in release so they'll have it when they play it, including previews). In fact, getting 1TB enterprise-rated drives is getting more difficult. HGST, the best drive company I've come across, only has one 1TB drive, and it exists to support legacy systems. The Western Digital "Gold" series has a viable 1TB drive too, but it will throw an error on SMART parameter 16, depending on your server brand (Doremi has a script to correct for it).
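
For what it's worth, that kind of periodic check is easy to script. A rough sketch (again assuming smartmontools on a Linux-based server; the history-file path and device names are made up) that flags any growth in reallocated sectors between checks:

```python
import json
import subprocess
from pathlib import Path

HISTORY = Path("/var/lib/drive-checks/history.json")  # hypothetical location

def reallocated_count(device):
    """Return the raw Reallocated Sector Count (SMART attribute 5), if found."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == "5":
            return int(fields[-1])
    return None

def check(devices):
    """Compare each drive against the last visit and flag any new reallocations."""
    history = json.loads(HISTORY.read_text()) if HISTORY.exists() else {}
    for dev in devices:
        count = reallocated_count(dev)
        if count is None:
            continue
        previous = history.get(dev, 0)
        if count > previous:
            print(f"{dev}: reallocated sectors grew {previous} -> {count}")
        history[dev] = count
    HISTORY.parent.mkdir(parents=True, exist_ok=True)
    HISTORY.write_text(json.dumps(history))

check(["/dev/sda", "/dev/sdb"])  # hypothetical RAID member devices
```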


Leo Enticknap
Film God

Posts: 7474
From: Loma Linda, CA
Registered: Jul 2000


 - posted 06-03-2017 09:05 AM
quote: Mike Blakesley
It would save wear and tear on our server drives...
I believe that most if not all DCP server RAID controllers keep the drives spinning all the time anyway, so probably not. Consumer or semi-professional RAID applications such as NAS boxes and software-based RAIDs (e.g. Windows 10 Storage Spaces) will spin the drives down after a timeout period with no access requests, but that's not the case for hardware RAIDs used in professional server applications.

Agreed with others above - the whole point of having a RAID in a DCP server is so that if one drive fails in action, the show can go on. The live play option is simply a "get out of trouble" option for, say, if the drive arrives two minutes before showtime. I can't see any reason to use it if you don't have to.


Marcel Birgelen
Film God

Posts: 3357
From: Maastricht, Limburg, Netherlands
Registered: Feb 2012


 - posted 06-03-2017 09:13 AM
I think the downsides to live playing are somewhat obvious and also already mentioned.

If you program multiple movies in the same auditorium, live playing has the obvious limitation that you're constantly swapping drives around.

The load on a single drive during playout is far higher than on a RAID array, where the load is distributed across multiple drives. This increases the stress on the drive and therefore also the risk of failure. Also, most of those drives in distribution aren't spring chickens; they get reused after their engagement. You could look at the SMART values to see the "mileage" on a drive once you get it, but that will still give you only a marginally trustworthy indication of how well the drive will perform during an engagement.

The obvious risk is the drive failing during the engagement. It's already bad to lose a show, but you risk losing much more of your engagement than just a single show. So, I think you should make a backup of the drive before the engagement.

A possible alternative could be to buy a sufficiently big SSD and duplicate the contents of your distribution drive onto it. Although all SSDs have quasi-hard limits on the number of writes they can handle, they support almost unlimited read cycles. Also, a single SSD will easily outperform any normal "rotating rust" RAID, so you don't need to worry too much about performance. In the unlikely event of the SSD failing, you still have the original drive as a fallback.
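
To make the copy-and-verify idea concrete, a minimal sketch (the mount points are hypothetical, and a full DCP check would also validate the hashes in the PKL, which this does not attempt):

```python
import hashlib
import shutil
from pathlib import Path

def sha1(path, bufsize=1 << 20):
    """Stream a file through SHA-1 so multi-GB MXFs never sit in RAM."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def copy_and_verify(src_root, dst_root):
    """Copy every file from the shipping drive and confirm the bytes match."""
    src_root, dst_root = Path(src_root), Path(dst_root)
    for src in src_root.rglob("*"):
        if src.is_file():
            dst = dst_root / src.relative_to(src_root)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            if sha1(src) != sha1(dst):
                raise IOError(f"verification failed on {src}")

# hypothetical mount points for the shipping drive and the backup SSD
copy_and_verify("/mnt/shipping_drive", "/mnt/backup_ssd")
```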

quote: Steve Guttag
We are using this opportunity to offer larger drives: systems that had 2-3TB of storage when they were new five years ago can now have 4-6TB (or more; one sub-run customer is at 12TB of storage per server, since they have to "harvest" content while it is in release so they'll have it when they play it, including previews).
I think that once you go past the 1TB per drive mark, RAID5 actually isn't good enough anymore. You really need to go to a RAID level that offers more protection, like dual-parity RAID6. Unfortunately, the extra parity slice will cost you an additional disk.

I've had quite a few RAID5 arrays fail during rebuild, especially old arrays where a lot of the disks were roughly the same age. Also, with such large disks, the rebuild takes forever to complete, which only lengthens the window during which your entire RAID is at risk of total failure and performance is degraded.
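
As a back-of-the-envelope illustration of that rebuild window (the sustained rebuild rate here is an assumption; real arrays rebuild slower while also serving shows):

```python
# Floor on the rebuild window: every surviving disk must be read end to end,
# so the minimum time is per-drive capacity over sustained throughput.
drive_tb = 4               # per-drive capacity
throughput_mb_s = 120      # assumed sustained rebuild rate in MB/s
seconds = drive_tb * 1e12 / (throughput_mb_s * 1e6)
print(f"~{seconds / 3600:.1f} hours minimum")  # about 9.3 hours for a 4TB drive
```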

Luckily, most cinema servers can easily re-ingest their content from somewhere else.

quote: Leo Enticknap
I believe that most if not all DCP server RAID controllers keep the drives spinning all the time anyway, so probably not. Consumer or semi-professional RAID applications such as NAS boxes and software-based RAIDs (e.g. Windows 10 Storage Spaces) will spin the drives down after a timeout period with no access requests, but that's not the case for hardware RAIDs used in professional server applications.
There's quite some debate on whether spinning up and down does disks more damage than just letting them spin.

My personal experience is that disks that spin 24/7 usually last longer. Still, there's a lot of controversy in the industry at large.

There's another thing, though. A busy drive, on average, will almost certainly wear out quicker than an idle drive. Besides keeping the platters spinning, a drive that does a lot of reading and/or writing also has to move its heads much more often, which wears out the actuator.

Also, a drive that writes a lot needs to demagnetize and re-magnetize the platter surfaces much more often, which also leads to wear.


Mike Blakesley
Film God

Posts: 12767
From: Forsyth, Montana
Registered: Jun 99


 - posted 06-03-2017 04:33 PM
quote: Dennis Benjamin
I guess everyone is having RAID issues now that we've hit that 5 year mark?
We are indeed having a RAID issue but interestingly, the trouble is in the "box" rather than in one of the drives (apparently...they all show as healthy).


Adam Martin
I'm not even gonna point out the irony.

Posts: 3686
From: Dallas, TX
Registered: Nov 2000


 - posted 06-03-2017 05:52 PM
In my experience, the GDC interface ALWAYS shows all drives as OK. Then you send GDC logs and they say "you have a bad hard drive".


Marcel Birgelen
Film God

Posts: 3357
From: Maastricht, Limburg, Netherlands
Registered: Feb 2012


 - posted 06-03-2017 06:59 PM
What kind of server have you got? A GDC machine?

The problem could very well be in the RAID controller, but I've also seen cases where a single faulty drive causes trouble that isn't correctly noticed by the controller.

Like Adam already indicated, there's often something in the logs or in the SMART data of those drives.

The low-tech way to identify the affected drive would be to pull a disk and test the result; if the problem stays the same, put the disk back into the machine, wait until the RAID has rebuilt, pull the next disk, and repeat...

This procedure takes a lot of time, and if there's anything on the RAID that's dear to you, you should definitely back it up first.


Scott Norwood
Film God

Posts: 8146
From: Boston, MA. USA (1774.21 miles northeast of Dallas)
Registered: Jun 99


 - posted 06-03-2017 07:02 PM
Agreed with the others--it will work as long as the "shipping drive" is in good condition and neither has to repeatedly re-read information (causing pauses or stuttering) nor fails outright (causing show stoppage), but there is really no advantage vs. loading the content onto the server's RAID. The "wear and tear" on hard disks from long, sequential reads (as in a cinema server) is minimal.

The RAID 5 issue is that disk capacities have increased, but the unrecoverable bit error rate has not. If a disk fails, the data will still be protected (and the show will continue), but the RAID may not be able to rebuild itself when the bad drive is replaced. This is because, statistically, the remaining drives will likely encounter an unrecoverable error in the process of rebuilding the array (which requires reading every bit on every disk successfully). The problem relates to the size of the RAID (in total), not the capacity of the individual disks, but a RAID5 over 12TB using normal SATA disks is asking for trouble. With a cinema server, this is not a big problem, since one can just replace the bad disk and wipe and reconstitute the array (and reload the content), rather than having to allow the RAID to rebuild. Yes, RAID6 is a better approach that allows for much larger disks to be used before RAID rebuilding becomes a problem.
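
To put rough numbers on that, a quick sketch using the commonly quoted consumer SATA spec of one unrecoverable read error per 10^14 bits read (that spec figure is the assumption here):

```python
import math

URE_PER_BIT = 1e-14      # typical consumer SATA spec: 1 error per 1e14 bits read
array_tb = 12            # data that must be read back to rebuild the array
bits = array_tb * 1e12 * 8
expected_errors = bits * URE_PER_BIT           # ~0.96 expected UREs per rebuild
p_clean_rebuild = math.exp(-expected_errors)   # Poisson approximation
print(f"P(rebuild completes with no URE) ~ {p_clean_rebuild:.0%}")  # roughly 38%
```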

Personally, I hope that future cinema servers avoid the RAID5/RAID6 thing entirely and just use larger disks (4TB or 6TB) in a RAID1 (maybe in a 3-way mirror, for extra safety), which is simpler, more reliable, and safer to implement in software (as opposed to requiring a hardware controller).




All times are Central (GMT -6:00)


© 1999-2020 Film-Tech Cinema Systems, LLC. All rights reserved.