FMod Posted September 23, 2016

Again, the exception to this rule is data that's constantly subject to change or replacement, like databases. That kind of data gets the most benefit from SSD storage.

Consider randomly writing 8 KB blocks into a database. A 4 TB HDD will give you 200 IOPS, or 1.6 MB/s. 1.6 MB/s * 3,600 = 5.76 GB/hour. The Samsung Evo, which is a low-cost TLC part, offers 300 TBW of endurance. That is 5.95 years of 24/365 operation - basically, 6 years. The Sandisk Optimus, a budget-conscious example of an enterprise 4 TB SSD, offers 3,468 TBW of endurance. That is 68 years.

So, subjected to the same load as the maximum a 4 TB HDD can handle in a database application, even a cheap TLC-based 4 TB SSD will hold out for 6 years. In 6 years it will have been written off for at least a year; the company will make a couple bucks selling it on eBay, and the buyer will happily run it for years more.

Of course an SSD can write more data per second - that's the point. But more data per second means more work done. And to match an enterprise SSD you need an array: one of the highest-performance HDD configurations, 8x He8 in RAID 0, will do 2,250 IOPS, or 18 MB/s = 66 GB/hour. That is still 6 years for an enterprise SSD. The Optimus 4 TB costs $2,500: https://www.amazon.com/Sandisk-Optimus-Solid-State-Drive/dp/B00MET68CY A RAID controller plus 8 SAS drives will run you at least $1,200. Yes, that is half the price. But consider that RAID 0 is notoriously unreliable, and that the HDD array at this rate is running at 99% utilization, while the SSD sits at a mere 20% for writes and 3% for reads. If you know any queuing theory, you know what that means: the SSD can take a peak load several times higher without missing a beat, while the HDD array can't absorb any variation in load at all.

Now, databases aren't used 24/365. The vast majority of database accesses happen 8 hours a day, 5 days a week, 50 weeks a year, so your peak load is at minimum 450% of your 24/365 average. Realistically, since you also get peaks in specific hours of the workday (everyone booting up in the morning), it's higher still. Then consider the rack space taken up and the power consumed by 8x 3.5" HDDs instead of a single 2.5" SSD, and you'll see why I will never recommend, support, sign off on, or otherwise condone using mechanical HDDs for anything but the most lightly loaded database applications. Write endurance is a complete non-issue here: any RDBMS-type workload that would wear out an SSD within a decade would grind an HDD array to a halt on its first day of operation.

Now, there is one thing HDDs win at: video. Since it's all sequential, an HDD can write much faster, at 100 MB/s and above, and that kind of load will wear an SSD out. But HDDs aren't competing with SSDs there to begin with - they're competing with tape, and they're competing with it on cost.
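For anyone who wants to check the arithmetic, here's a minimal back-of-the-envelope sketch (Python, decimal units, round-the-clock writing assumed) that reproduces the lifetime figures above. The TBW ratings and IOPS numbers are the ones quoted in the post; the helper names are just for illustration.

```python
# Rough check of the endurance figures quoted above.
# Assumptions: decimal units (1 KB = 1000 B), 8,766 hours per year,
# and the drive being written at the stated rate 24/365.

def write_rate_mb_s(iops, block_kb=8):
    """Sustained write rate in MB/s for a given random-write IOPS and block size."""
    return iops * block_kb / 1000

def years_to_wear_out(tbw, iops, block_kb=8):
    """Years of continuous writing before the rated TBW is exhausted."""
    mb_per_hour = write_rate_mb_s(iops, block_kb) * 3600
    hours = tbw * 1_000_000 / mb_per_hour   # convert TBW to MB, divide by hourly rate
    return hours / 8766

# Single 4 TB HDD feeding the writes: 200 IOPS of 8 KB blocks
print(years_to_wear_out(300, 200))     # Samsung Evo, 300 TBW      -> ~5.9 years
print(years_to_wear_out(3468, 200))    # Sandisk Optimus, 3,468 TBW -> ~69 years

# 8x He8 in RAID 0 feeding the writes: 2,250 IOPS
print(years_to_wear_out(3468, 2250))   # Optimus under array-speed load -> ~6 years
```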
TheMastersSon Posted September 23, 2016

FMod, I agree with most of that. The write-cycle limitation is much less of an issue with current SSDs than it was with older ones, to the point where larger companies (like eBay) that can afford to routinely swap out hardware on its warranty cycle have moved to SSDs to handle some or maybe most of their database workload. I sometimes give advice from force of habit, and this one ("no SSDs for dynamic storage") will take me some time to get over. :smile:

EDIT: The first few SSD iterations were so bad they could be crashed simply by running Windows 24/7 for a few weeks. This was before management routines were optimized, and because of pagefile maintenance Windows never stops writing to the same set of blocks on a drive. That's not a problem for mechanical drives, but it was a silver bullet for SSDs prior to the auto-relocation/TRIM/etc. functions of current firmware and drivers.
FMod Posted September 24, 2016

There hardly ever really was a time when there was a valid need to avoid SSDs in write-heavy applications. Maybe in the early days of flash, back when flash drives weren't yet called SSDs and didn't have conventional interfaces anyway, so it's not like you could use them if you wanted to.

One of the first really popular consumer SSDs, the Intel X25, which came out in 2008 (and was my first SSD at home too), had its endurance tested, for its cheapest 80 GB version, at 15 TB of 4 kB writes, or 150+ TB of sequential writes. I have heard of some X25s dying from heavy server loads, but none that were used in home desktops. Now, you can calculate how long it takes the X25 to do 15 TB of writes, but why? Count how long it takes a hard drive. A 2008 HDD could, on a good day, pull 100 IOPS - or 400 kB/s under a 4 kB workload. That's 10,400 hours, or just over 5 years under a 9-to-5 schedule.

There was a short era in the mid-to-late 2000s of really crappy, low-cost, almost fake SSDs with bad controllers, but even their real problem was abysmal performance. You'd still sooner toss them and go back to your HDD than wear them out, though you could. Even among very low-cost drives, I can't recall reports of the original 7", 2-4 GB Eee PC (no, not RAM - 2-4 GB of flash storage) running out of write cycles, and those were genuinely at risk, since their small 512 MB of RAM meant frequent swapping. The SSD write-cycle paranoia has always been propped up by intentional flash-burn scenarios, with workloads that couldn't have been executed on a conventional HDD in the first place.

P.S. As an aside: the HDD industry really killed itself with a mix of overconfidence and a complete lack of confidence in its product, instead of a healthy middle ground. 2011 saw soaring margins, and some of those were spent buying SSD companies in a half-hearted attempt to simply buy out the competitor. In 2012-2015 there was large market demand for hybrid SSD+HDD products that would deliver HDD capacity with mostly-SSD performance. The HDD industry's response was completely botched: Seagate overconfidently paired a slow drive with a far-too-small 8 GB buffer, while WD plainly piggybacked an HDD on top of an SSD; one bad attempt each. The solution the market needed was software - efficient caching, multi-tiered storage - and that's a market the HDD makers had a chance to corner.

A late-2016 performance HDD should have been a sealed unit with a helium section of 3-6x 1.5 TiB 7,200 rpm platters for desktops or 1-3x 1 TiB for laptops, a 64-256 GB (1:64-1:16) flash section, a 1-4 GB RAM buffer, a failover capacitor, a SFF-style NVMe PCI-E link (not the SATA Express monstrosity) at 1.2+ GB/s, and deduplicated cloud backup. The secret sauce would have been having the flash and the platters run by the same controller: absorb a few seconds of data into the RAM buffer, make a content-based guess about where to store it, and let the magnetic media serve as a GC buffer for the NAND. Possibly a move to unified 2.5" platters and 8,400 rpm for desktops. This isn't far-fetched; dedicated accelerators already do much of it.

If the above was too technical, think Honda NSX: hybrid tech that augments a piston engine with electric motors. A large flash buffer with an adaptive caching solution would have given consumers quick all-flash boot and wakeup and "all-electric" operation for web browsing, potentially extended by cloud access for small cache misses.
Drives like this, while probably in the $180-$200 range for a 4 TiB desktop or 2 TiB laptop model, could have competed with SSDs in both spaces, giving consumers comparable performance 75% of the time while keeping capacity out of SSD reach. Now it's too late: the market itself is vanishing as cheap SSDs substitute for it, and what little is left is controlled by SANs and dedicated flash accelerators.
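If the "one controller owning both tiers" idea is easier to read as pseudocode than prose, here is a deliberately simplified sketch. Everything in it (the class name, buffer limit, and hot-data heuristic) is invented for illustration; it only shows the shape of the placement decision described above, not any shipping firmware or product.

```python
# Illustrative sketch only: a hybrid controller that absorbs writes into a
# RAM buffer, then makes a content/access-based guess about which tier each
# block belongs on. All names and thresholds are hypothetical.

from collections import deque

RAM_BUFFER_LIMIT = 4 * 1024**3       # 4 GB RAM buffer, as in the post
FLASH_HOT_THRESHOLD = 2              # rewrites before a block counts as "hot"

class HybridController:
    def __init__(self):
        self.ram_buffer = deque()    # (lba, data) pairs awaiting placement
        self.buffered_bytes = 0
        self.write_counts = {}       # lba -> how often it has been rewritten

    def write(self, lba, data):
        """Absorb a write into RAM; flush to a tier once the buffer fills."""
        self.ram_buffer.append((lba, data))
        self.buffered_bytes += len(data)
        self.write_counts[lba] = self.write_counts.get(lba, 0) + 1
        if self.buffered_bytes >= RAM_BUFFER_LIMIT:
            self.flush()

    def flush(self):
        """Place buffered data: hot or small random blocks go to flash, cold bulk to platters."""
        while self.ram_buffer:
            lba, data = self.ram_buffer.popleft()
            if self.write_counts[lba] >= FLASH_HOT_THRESHOLD or len(data) <= 64 * 1024:
                self.write_to_flash(lba, data)     # latency-sensitive, frequently rewritten
            else:
                self.write_to_platters(lba, data)  # large sequential runs, cold data
        self.buffered_bytes = 0

    def write_to_flash(self, lba, data):
        pass  # stand-in for the NAND path

    def write_to_platters(self, lba, data):
        pass  # stand-in for the magnetic path
```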
TheMastersSon Posted September 24, 2016

"There hardly ever really was a time when there was a valid need to avoid SSD in write-heavy applications." That's as far as I got. There's just no agreeing with some people. Uh, yes there was "hardly ever" a time when there was a valid need to avoid SSDs in write-heavy applications - you're talking to one of the victims of those early SSD iterations. Windows routinely crashed and burned simply from being left running for a few weeks.

Look, it's inevitable that every forum in the world has at least one egomaniacal, myopic fool who absolutely must come over the top of everyone's advice and info and have the last word in any discussion. I spent 40 years being you, and I'm very glad I'm not anymore. But if you could at least tone down the Melvin routine it would be appreciated. Try to build on and add to discussions instead of tearing them down and making ridiculously overreaching claims (I've lost count by now, but my favorite so far is "people don't use databases 24/7" - ask eBay or Google or Level 3 about that one) simply as a platform to display your unarguably extensive knowledge.
FMod Posted September 25, 2016

"Uh, yes there was hardly ever a time when there was a valid need to avoid SSD in write-heavy applications. You're communicating with one of the victims of early SSD iterations. Windows routinely crashed and burned simply by leaving it running for a few weeks."

Yes, there was a time when really low-quality SSDs were prevalent on the consumer market. Their problems were mostly due to old JMicron controllers, which sucked. But even then it wasn't so much about running out of write cycles as it was about poorly designed controllers - although those controllers certainly did kill some drives through write amplification. I wasn't trying to show off or anything, I just elaborated on a point more than was necessary.

(As for large companies with 24/7 databases, they work with far more complex storage solutions than a single SSD or HDD array in the first place. I brought up a situation that's more familiar to me and closer in scope to consumer drive choices.)
Erik005 Posted September 25, 2016

I don't think there was a major SSD manufacturer that didn't have problems when SSDs went mainstream. OCZ took a few major hits with the Vertex series, and even my Crucial m4 had a weird bug that caused the system to shut down every 100 hours of operation.
TheMastersSon Posted September 26, 2016

The problem was well known and widespread enough that I didn't think mentioning it would be controversial. It was the very motivation for drive features such as wear leveling etc. in firmware and drivers, as mentioned, and we were riding that bleeding edge at the time. What's irritating is having our specific experience answered with 18 paragraphs of technical info that attempts to invalidate it. :)
FMod Posted September 29, 2016

But I'm not theorizing either; I've also had specific personal experience with the first consumer SSD generation. My first home SSD was an Intel X25, and in 2008 I think I count as an early adopter - I can't recall anyone else outside of hardcore enthusiast circles who had any flash storage other than USB sticks and SD cards. That drive ran perfectly for me until a year back, when I simply ran out of ports, and as far as I know it's still running fine for its new owner.

The problem did exist. But it wasn't universal: at the same time, enterprise SSDs were well developed, getting common, and didn't suffer from anywhere near the range of problems cheap consumer drives did. IOW, IMO, bad consumer SSDs were a problem of trying to build them on the cheap - and botching it - more than a problem of the flash technology itself.
xpwnStarx Posted October 22, 2016

The SSDs did have issues in the beginning, which is why I didn't adopt early (super broad statement, lol). I also appreciated your insight, FMod. Oh, and if you can afford a 4 TB SSD, dooo it.
Fatalmasterpiece Posted October 22, 2016

Buy a 250-500 GB SSD for your Windows and gaming installations. Use an additional 1 TB HDD for your music, photos, movies, etc.