Film-Tech Cinema Systems
Film-Tech Forum ARCHIVE


  

 
This topic comprises 4 pages: 1  2  3  4 
 
Author Topic: GDC SX2001 RAID HDD - Compatible drives?
Steven J Hart
Master Film Handler

Posts: 282
From: WALES, ND, USA
Registered: Mar 2004


- posted 02-20-2018 10:51 AM
My GDC server was installed in 2011. Its RAID is equipped with Hitachi H3V10003272S drives. I have had no problems yet, but noted that this is an obsolete part number and only refurbished units are available. Are there 3.5" drives currently in production that are direct replacements for this RAID?

Secondly, the OS drive is original. Has anyone here tried cloning that drive to an SSD?


Leo Enticknap
Film God

Posts: 7474
From: Loma Linda, CA
Registered: Jul 2000


- posted 02-20-2018 11:55 AM
I'm sure that GDC (us-support [at] gdc-tech.com) could give you a list of drive models that are in production today and that you could use without there being a warranty problem.

One potential gotcha, given that your server dates from 2011: I recently had to replace one that was first installed at around that time, after its motherboard failed. The theatre reactivated their long-expired warranty, and GDC shipped a new server. It turned out that the failed one still had a pre-DCI software version in it, and there was no direct LAN connection from the media block to the projector: just from the server. The new one had the current, DCI-compliant software in it, and of course the security manager would not work without the media block having a direct connection to the booth LAN. Because the ethernet cable from the server disappeared into a ceiling void, I ended up having to run out to Office Depot for a switch and some cables, install them in the rack with the server, and reconfigure some IP addresses to get the replacement server working.

I'm guessing that if you replace the system drive with an SSD while your server is still on a pre-DCI software version, you'll have to update it to a DCI-compliant version at that point, and if the media block does not have its own connection to the LAN, you'll hit the same issue.


Steven J Hart
Master Film Handler

Posts: 282
From: WALES, ND, USA
Registered: Mar 2004


- posted 02-20-2018 03:18 PM
Thanks Leo, I did email GDC about the RAID drives. Awaiting a reply.
On my second question, I am indeed running an older version of the GDC operating system. 7.7x I believe. I’m wondering if it would be possible to remove my current OS drive and clone it to a new SSD in order to prevent a failure in the future. The person who installed my equipment is no longer in the business, and it is difficult (and very expensive) to get service techs to come to my remote rural location.


Scott Norwood
Film God

Posts: 8146
From: Boston, MA. USA (1774.21 miles northeast of Dallas)
Registered: Jun 99


- posted 02-20-2018 03:26 PM
I would strongly suggest just replacing all three drives in the RAID. If they are all from 2011, they are past their life expectancy at this point. And, for best performance, all disks in a RAID5 should have the same geometry and firmware revision.


Monte L Fullmer
Film God

Posts: 8367
From: Nampa, Idaho, USA
Registered: Nov 2004


- posted 02-20-2018 03:41 PM
And the drives have to be Enterprise SATA drives.

Can't use regular consumer drives.

Don't think you can use SSDs in these cinema servers.


Leo Enticknap
Film God

Posts: 7474
From: Loma Linda, CA
Registered: Jul 2000


- posted 02-25-2018 04:55 PM
quote: Steven J. Hart
I’m wondering if it would be possible to remove my current OS drive and clone it to a new SSD in order to prevent a failure in the future.
You could try Clonezilla-ing it and seeing what happens. If the clone doesn't work, you can always put the original drive back in again.
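For what it's worth, a raw sector-for-sector copy on a separate Linux PC would do much the same job as Clonezilla. A rough, untested sketch (the /dev/sdX and /dev/sdY device names are placeholders you'd have to confirm with lsblk first, and getting them backwards destroys the original):

import subprocess

# Raw copy of the old OS drive onto the new SSD, partition table and all.
SOURCE = "/dev/sdX"   # original GDC OS drive (read from only) -- placeholder
TARGET = "/dev/sdY"   # new SSD, at least as large as the source -- placeholder

subprocess.run(
    ["dd", "if=" + SOURCE, "of=" + TARGET,
     "bs=4M", "conv=sync,noerror", "status=progress"],
    check=True,
)

The SSD has to be at least as big as the old drive for a raw copy like this, and none of this is a GDC-supported procedure.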

As for replacing the RAID drives, agreed with Scott: your current ones are on borrowed time. If it hasn't been done recently, opening up the case and blowing all the crud out with a Datavac (or similar) would be a good idea, too, not to mention replacing the media block battery.

While any enterprise-grade drive will likely work (with the caveat mentioned by Scott: best to use three of the exact same model), if your server is under a GDC warranty, I'm guessing they'll have a list of approved models from which you'll have to choose in order to keep that warranty good. If it isn't, any NAS or enterprise-grade drive should do. Given your remote location, it may be worth buying four and keeping one as a spare.


Marcel Birgelen
Film God

Posts: 3357
From: Maastricht, Limburg, Netherlands
Registered: Feb 2012


- posted 02-26-2018 02:18 AM
quote: Monte L Fullmer
And the drives have to be Enterprise SATA drives.

Can't use regular consumer drives.

That's nonsense. The first thing you need to make sure of is that the drive is compatible with the motherboard.

The second thing you need to make sure of is that it's suitable for use in a RAID, as in, that it isn't running a "green" firmware which automatically spins the drive down after a certain period of inactivity.
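If you want to rule that out before trusting a candidate drive in the array, its power management setting can be read from a Linux box. A minimal sketch (the device name is a placeholder, and some drives don't report APM at all):

import subprocess

# Read the drive's Advanced Power Management level with hdparm.
# Levels 1-127 allow the drive to spin down on its own; 128-254 don't,
# and "not supported" means the drive has no APM feature at all.
result = subprocess.run(["hdparm", "-B", "/dev/sdX"],
                        capture_output=True, text=True)
print(result.stdout.strip())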

Several reports (like those from Backblaze) have shown that there is no substantial difference in reliability between "enterprise" and "consumer" SATA hard drives. They're essentially the same drives; the difference is often just the serial number series and the warranty that comes with it.

For example, the last bunch of "enterprise" disks I bought came with a 3-year warranty, whereas the "consumer" disks are limited to 2 years.

quote: Monte L Fullmer
Don't think you can use SSDs in these cinema servers.
I've seen it work, but it isn't in any way supported by GDC, so it's probably best to refrain from doing it. Unless you really need the extra potential speed for content ingestion, I don't see a good use case for SSDs in those servers.


Leo Enticknap
Film God

Posts: 7474
From: Loma Linda, CA
Registered: Jul 2000


- posted 02-26-2018 07:50 AM
There can be firmware differences between drives badged as being for consumer and NAS/enterprise use. The consumer drives are more likely to have the "green" features you describe, as well as firmware that will have the drive re-read a sector repeatedly if it gets bad data (fails a CRC or whatever) the first time. This is bad news in a RAID, because the flow of data stops while the drive keeps retrying. Drives with RAID-optimized firmware will simply give up on a problem sector quickly, and the RAID controller will then immediately mark it as a bad sector (whether it's actually bad, or the read error was for another reason) and read the backup copy.

This is why I've occasionally pulled a drive from a DCP server that was showing millions of bad sectors, repartitioned and reformatted it on a PC, found its SMART parameters totally healthy, then done a full surface scan that came up completely clean: no bad sectors at all.

But from what I can gather, the difference between drives branded for consumer or semi-professional NAS use (e.g. WD Red) and drives branded as enterprise (e.g. WD Gold) is simply the length of the warranty offered. It's the same as the difference between regular and "super" car batteries: same battery, different color label, longer warranty.


Marcel Birgelen
Film God

Posts: 3357
From: Maastricht, Limburg, Netherlands
Registered: Feb 2012


- posted 02-26-2018 10:33 AM
quote: Leo Enticknap
There can be firmware differences between drives badged as being for consumer and NAS/enterprise use. The consumer drives are more likely to have the "green" features you describe, as well as firmware that will have the drive re-read a sector repeatedly if it gets bad data (fails a CRC or whatever) the first time. This is bad news in a RAID, because the flow of data stops while the drive keeps retrying. Drives with RAID-optimized firmware will simply give up on a problem sector quickly, and the RAID controller will then immediately mark it as a bad sector (whether it's actually bad, or the read error was for another reason) and read the backup copy.
You should simply check the specs to see whether they can be operated within a RAID. In many of those "green" hard drives, the enhanced power features are actually turned off when you use them in a RAID, but you simply don't want to run the risk of running into such a disk.

The number of retries after a CRC error is a combination of both the hard drive and the RAID controller. Whether the firmware really behaves differently regarding read retries and reallocations (when the drive knows it's inside a RAID) is something no hard disk manufacturer has been able to answer for me in the past.

Now, for use in cinema servers, I'd actually like to see a shorter retry interval on those, at least as long as there are still spare disks. Semi-failed disks often cause latency issues (stuttering images) before they eventually fail completely and are finally rejected by the RAID controller.
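On drives that support it, the SCT Error Recovery Control timeout is the knob for exactly that: it caps how long the drive keeps retrying before it reports the error back to the controller. A sketch using smartmontools (the device name is a placeholder, plenty of consumer drives don't support SCT ERC at all, and on some the setting doesn't survive a power cycle):

import subprocess

DEVICE = "/dev/sdX"  # placeholder

# Show the current SCT Error Recovery Control (read/write) timeouts.
subprocess.run(["smartctl", "-l", "scterc", DEVICE])

# Cap both read and write recovery at 7.0 seconds (values are given
# in tenths of a second).
subprocess.run(["smartctl", "-l", "scterc,70,70", DEVICE])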


Mark Gulbrandsen
Resident Trollmaster

Posts: 16657
From: Music City
Registered: Jun 99


- posted 02-26-2018 10:36 AM
You can get them from GDC or a place like Newegg or even Amazon. GDC no longer stocks the 1TB drives and will send you 2TB drives instead. Always replace ALL THREE! You can put in one drive at a time and let it rebuild so you don't lose content, but if you move to 2TB drives the new drives will likely only show up as 1TB drives. So if you make a new RAID, be sure to have your black roll and any other necessary clips you use at hand on another drive or USB so you can load them back on.
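To put rough numbers on why the 2TB drives show up small (plain RAID-5 arithmetic, nothing GDC-specific):

# RAID-5 usable capacity is (drive count - 1) x the smallest member,
# so an array built around 1TB members doesn't grow just because
# bigger drives are rebuilt into it one at a time.
def raid5_usable_tb(member_sizes_tb):
    return (len(member_sizes_tb) - 1) * min(member_sizes_tb)

print(raid5_usable_tb([1, 1, 1]))  # original array: 2 TB usable
print(raid5_usable_tb([2, 1, 1]))  # mid-swap: still 2 TB usable
print(raid5_usable_tb([2, 2, 2]))  # rebuilt from scratch: 4 TB usable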

Mark



Mike Blakesley
Film God

Posts: 12767
From: Forsyth, Montana
Registered: Jun 99


- posted 02-26-2018 01:58 PM
A person should probably just replace all those drives as a preventive maintenance measure. I wonder what a sensible interval would be?


Mark Gulbrandsen
Resident Trollmaster

Posts: 16657
From: Music City
Registered: Jun 99


- posted 02-26-2018 02:45 PM
Well, as a normal preventative maintenance thing you can swap them every three to five years. But you might get more life from them if you go by the accumulated SMART errors that each drive logs. There is a free program called WinDFT on the HGST web site that you can use to thoroughly analyze any Hitachi or HGST hard drive. It almost gives one too much info, but it's the logged SMART errors that are most important. On 3.5" drive servers you can view these errors directly on the server under RAID in Admin, or use the WinDFT program.
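If you're not on Windows for WinDFT, smartmontools reads the same logged SMART data from just about any drive. A rough sketch (the device name is a placeholder):

import subprocess

DEVICE = "/dev/sda"  # placeholder

# Dump the SMART attribute table. Deliberately no check=True here:
# smartctl encodes health warnings in its exit status.
out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout

# The attributes that matter most on a tired drive.
watch = {"5": "Reallocated_Sector_Ct",
         "197": "Current_Pending_Sector",
         "198": "Offline_Uncorrectable"}

for line in out.splitlines():
    fields = line.split()
    if fields and fields[0] in watch:
        print(watch[fields[0]], "raw value:", fields[-1])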

Mark


Marcel Birgelen
Film God

Posts: 3357
From: Maastricht, Limburg, Netherlands
Registered: Feb 2012


- posted 02-26-2018 04:41 PM
quote: Mike Blakesley
A person should probably just replace all those drives as a preventive maintenance measure. I wonder what a sensible interval would be?
Like with many somewhat complex scenarios, there isn't a single right answer. The only correct answer would be: It depends on your usage scenario. Which doesn't say anything. [Wink]

While a hard drive inside a computer may survive for 10 or more years, heavily dependent on actual usage, there are some more reliable statistics out there for disks that are being used intensively, although most of them are based on 24/7 datacenter usage.

Now, in many cases that comes close to the usage in a regular theater. Many keep their servers spinning 24/7, and if you don't, the extra stress on the hard drive motor and platters from spinning down and up will most likely cancel out whatever you save in running hours.

The most reliable statistics come from parties that run big datacenters, like Backblaze, who regularly publish vendor-independent figures on hard drive reliability.

There is a distinctly higher failure rate for hard drives in the first year and a half: roughly 5% across the board. Drives that survive the first 18 months usually last at least 3 years, because between 18 and 36 months the failure rate is only roughly 1% a year. After about three and a half years, the failure rate starts to increase dramatically, to roughly 11% per year.
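Folding those rough percentages into a single survival curve (back-of-the-envelope arithmetic using the approximate rates above, not actual Backblaze tables) gives a feel for where the three-year mark sits:

# Rough survival estimate: ~5% of drives fail in the first 18 months,
# ~1% per year up to about 3.5 years, ~11% per year after that.
# Purely illustrative.
def fraction_surviving(years):
    if years <= 1.5:
        return 1.0 - 0.05 * (years / 1.5)
    p = 0.95
    p *= 0.99 ** (min(years, 3.5) - 1.5)
    if years > 3.5:
        p *= 0.89 ** (years - 3.5)
    return p

for y in (1, 3, 5, 7):
    print(f"after {y} years: ~{fraction_surviving(y):.0%} still running")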

So, as a rule of thumb for intensive-usage applications, which covers most theaters that run 365 days a year, I think you should consider replacing hard drives preemptively after roughly three years.

Please keep in mind that SMART errors only tell part of the story. Although they are a good indication of the physical health of the magnetic platters, there are other factors that aren't as easily monitored, like the spindle motor and the voice coil actuator, which are also moving parts that get less reliable with age.


Scott Norwood
Film God

Posts: 8146
From: Boston, MA. USA (1774.21 miles northeast of Dallas)
Registered: Jun 99


- posted 02-26-2018 04:59 PM
Personally, I wouldn't replace the drives on a set schedule. No one does this in data centers, and there is no reason to do it in cinemas (the RAID is there so that you will survive a single drive failure). But when one of a set that was purchased at the same time fails and all are past their life expectancy, I would, indeed, swap out all of them.


Marcel Birgelen
Film God

Posts: 3357
From: Maastricht, Limburg, Netherlands
Registered: Feb 2012


- posted 02-26-2018 05:24 PM
quote: Scott Norwood
Personally, I wouldn't replace the drives on a set schedule. No one does this in data centers, and there is no reason to do it in cinemas (the RAID is there so that you will survive a single drive failure). But when one of a set that was purchased at the same time fails and all are past their life expectancy, I would, indeed, swap out all of them.
In many datacenters, much of the equipment gets thrown out after about 4 or so years, because it's become under-performing.

Still, I've seen enterprise environments that started replacing the disks in large storage arrays preemptively after about 4 years, not all at the same time but gradually.

There's an important note for SSDs that not everybody gets. While the MTBF of "rotating rust" has a large standard deviation, many SSDs have a rather small one. Meaning, SSDs are far more prone to failing at the same time, especially when operating in a RAID. Once the number of writes on an SSD reaches a critical level, it's prone to total failure. Also, many SSD failures aren't soft or gradual failures like with hard disks, but immediate hard failures, where the entire SSD becomes inaccessible.
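The write-endurance side is easy enough to put numbers on. A hypothetical example (both figures are invented for illustration, not measured on a GDC server):

# Hypothetical endurance arithmetic for an SSD used as content storage.
TBW_RATING = 600        # example rated endurance of a 1 TB SSD, in TB written
DAILY_WRITES_TB = 0.5   # example ingest load, a couple of features per day

years_to_rated_wearout = TBW_RATING / DAILY_WRITES_TB / 365
print(f"~{years_to_rated_wearout:.1f} years to reach the rated TBW")
# ~3.3 years -- and identical SSDs in a RAID tend to get there together,
# which is exactly the "all fail at once" risk described above.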




All times are Central (GMT -6:00)
 




© 1999-2020 Film-Tech Cinema Systems, LLC. All rights reserved.