Can my 2400 MHz RAM run faster at 1600?



Okay, so my question is: can I get my RAM to run faster at 1600 MHz and 1.5 V? I know buying 2400 MHz is a waste right now, but I don't care; I bought it thinking I could run it at lower timings than the average 1600 MHz RAM. I obviously didn't give that much thought until AFTER buying the new parts. I can run the XMP profile at 2400 smoothly with my i5 CPU, but I don't want to burn out the integrated memory controller by feeding the RAM 1.65 V, so I want to see if it's possible to run it at 1600 MHz with some pretty aggressive timings. So far, tweaking and fooling around hasn't produced much progress. I use MaxxMem2 to bench my RAM. With the XMP at 2400, I get a bandwidth of about 18 GB/s, which seems poor to me, especially for how "good" this RAM is supposed to be. Maybe it's because I'm using the i5-4690K? I'll list specs below. When I use the default timings at 1600, I get about 13 GB/s, but the timings are kinda high (I don't remember them right now).
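For context, the ~18 GB/s figure can be sanity-checked against the theoretical peak of dual-channel DDR3; copy benchmarks usually land well below peak. A quick back-of-the-envelope sketch (the function name is mine, purely for illustration):

```python
# Theoretical peak bandwidth for DDR3: transfers/s x 8 bytes per 64-bit channel
# x number of channels. Copy benchmarks like MaxxMem2 typically reach only
# ~50-70% of this, so ~18 GB/s against a 38.4 GB/s peak is not unusual.

def ddr3_peak_bandwidth_gbs(transfer_rate_mts, channels=2, bus_width_bytes=8):
    return transfer_rate_mts * 1e6 * bus_width_bytes * channels / 1e9

print(ddr3_peak_bandwidth_gbs(2400))  # 38.4 GB/s peak for DDR3-2400 dual channel
print(ddr3_peak_bandwidth_gbs(1600))  # 25.6 GB/s peak for DDR3-1600 dual channel
```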

 

 

 

Specs:

Intel Core i5 4690K at 4.4 GHz Turbo (BTW, overclocked, this badass processor challenges i7s and beats some, except some of the newest ones)

G.Skill TridentX (2 x 4GB) 2400 MHz CL11 http://www.newegg.com/Product/Product.aspx?Item=N82E16820231587

Mushkin 120GB SSD

Maxtor 160GB HDD (my larger HDD died; getting a new one soon)

Gigabyte Radeon HD 5770 OC Edition (badass card for how old it is, still slays games; upgrading soon)

ASRock Fatal1ty Z97 Killer Motherboard (highly recommend it)

700W PSU

CD/DVD Drive


Short answer: it doesn't matter at all. You can see differences in memory-specific benchmarks like MaxxMem2 or SiSoft Sandra, but in the real world it truly doesn't matter what you do, and the "big differences" for real-world applications will be more like 1-2% in many cases, especially for gaming.

 

Sources:

http://www.anandtech.com/show/7364/memory-scaling-on-haswell

 

The era of "tight timings helping performance" ended something like 10-15 years ago. Nowadays it's pretty much all the same thing, and the more expensive kits generally use the same chips (perhaps binned, perhaps not) or chip families as the cheap stuff; you're mostly paying for branding, graphics, and marketing.
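To put rough numbers on the timings trade-off: absolute latency is CAS cycles divided by clock speed, so a higher-clocked kit at looser timings often matches or beats a tight-timing kit at a lower clock. A quick illustration (the function name is mine):

```python
# First-word latency in nanoseconds: latency_ns = 2000 * CL / transfer_rate_MTs
# (DDR transfers twice per clock, so the cycle time in ns is 2000 / MT/s).

def cas_latency_ns(cl, transfer_rate_mts):
    return 2000.0 * cl / transfer_rate_mts

print(round(cas_latency_ns(11, 2400), 2))  # 9.17 ns for DDR3-2400 CL11
print(round(cas_latency_ns(9, 1600), 2))   # 11.25 ns for DDR3-1600 CL9
```

By this measure a 2400 CL11 kit already has lower absolute latency than a hypothetical 1600 CL9 setting, which is part of why chasing tight timings at lower clocks rarely pays off.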

 

The 4690K is on par with anything else from the last few generations (on either side) for gaming too - it doesn't make any significant difference one way or another, and there's no point overclocking it unless you just want to overclock. If you want a significant upgrade for gaming, it's the GPU: swap that 5770 out for something with more power behind it (e.g. a 280X) and you'll see real improvements in gameplay. The limitation is single-threaded performance; sure, you can dig up benchmarks showing Broadwell's IGP being faster than anything before it, or improvements in multimedia encoding, or synthetic benchmarks that rely on additional instruction sets (e.g. AVX), but none of that translates to gaming, especially if we're talking about TES/Fallout titles that are heavily single-thread bound (and they aren't really "unique" in this regard either).

 

Source on the CPU:

https://techreport.com/review/28751/intel-core-i7-6700k-skylake-processor-reviewed/7

http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/10

 

Fact of the matter is, both memory and CPU performance have been stagnant for 6-7 years now; you can spend hilarious money on a 2011-3/DDR4, Skylake/DDR4, or Broadwell/DDR3 setup with very expensive RAM and not see any significant, measurable benefit for gaming over many other, less expensive options. Save your money, upgrade the graphics card, and be happy. :cool:

 

 

That having been said, what JEDEC profiles does your RAM report for 1600 and 2400? It may be able to run at slightly better timings, or it may not. If there isn't a JEDEC profile with better timings (and it makes me sad that "good timings" now means CAS 9), then you're out in the wide world of legitimate overclocking (i.e. not just having some hand-holding application do everything for you): try the timings/voltage you want and see if it POSTs and boots at that setting; if it does, see if it'll run Prime95 or some other stress test stable for 12-24 hours, and if so, you've got a winner. If not, back it down, rinse, and repeat. You'll eventually find something that's stable. But I wouldn't waste the time, because, as I said above, it doesn't really make a difference at the end of the day, so just go with whatever it defaults to via JEDEC.
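The try-and-verify loop above is the same idea a software memory test automates: write known data, read it back, and look for flipped bits. A toy illustration only (my own sketch; nowhere near a substitute for memtest86+ or a 12-24h Prime95 run):

```python
import array

def pattern_check(mib=4, patterns=(0xAAAAAAAAAAAAAAAA, 0x5555555555555555)):
    """Write alternating-bit patterns into a buffer, then re-read and verify.
    An unstable memory overclock tends to show up as flipped bits under load."""
    words = mib * 1024 * 1024 // 8            # number of 64-bit words
    for p in patterns:
        buf = array.array('Q', [p]) * words   # fill the buffer with the pattern
        if any(w != p for w in buf):          # verify every word on re-read
            return False
    return True

print(pattern_check())  # True if no bits flipped
```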

Edited by obobski

Nicely said, I agree with everything. I forgot to mention, though, that I bought my new system around Black Friday, so everything was pretty cheap; I only paid roughly 40 bucks for the RAM, so I figured why not. But yeah, you cleared up some things I've read before and taught me some new things, thanks for the knowledge haha. I won't bother with it; I'm cool with running it at 1600 MHz at 1.5 V. I did notice things tended to load a bit faster. I think I got kinda power hungry after I saw what my CPU could do; I have everything running at defaults now. I did notice an FPS increase after upgrading from the Phenom II X6 1055T. The 280X is nice, but I have my eyes on a 390X. I would get a 290X, but it doesn't have DX12 support - maybe I could flash it. I've always had a better experience with ATI/AMD for graphics. The 280Xs are getting cheaper, so I might just get one of those since they still perform great today. I only plan on running 2K at most. I'll run my i5 at 5 GHz when games need that; right now at 5 GHz it seems to beat practically all i7s, including Extreme editions, except for one or two. I'll run more benches, but I am thoroughly impressed with this CPU :D

Thanks again for the reply



Yeah - even "high end" RAM can be pretty cheap these days, and even if you'd paid more, it'd be nothing to feel bad over. It isn't like years ago, when "high end" could mean $300-400/kit insanity. I bought some "fancy" Kingston for my system and it was only about $60; the cheap stuff (no heatspreaders, generic green PCB, etc.) was around $40-50 at the time, so yeah, I'll splurge the $10 for aesthetics.

 

On the rest:

 

- All GCN Radeons (which means HD 7000, 8000, 2xx, 3xx, and Fury) fully support DirectX 12, including async compute (and GCN Radeon is the *only* current GPU line that fully supports DX12; no nVidia part fully supports it yet, and nothing at all uses it beyond some synthetic benchmarks). So there's no need to get a 390X over a 290X, and no need to try flashing anything. The 280X was just a "for instance" example.

 

- Not surprising at all to see a recent Intel CPU beat K10. The "current generation" of Intel CPUs (Nehalem forward) was the last significant IPC jump, and everything since then has stagnated (e.g. you could've gone with a 2600K, 3770K, 6700K, etc. and seen the same kind of improvements for gaming). Running at 5 GHz seems unnecessary unless you just want to overclock - it's beyond anything available at retail, so it's beyond anything a developer could/should realistically require for a game (of course there are some games that just run badly everywhere, but that's another kind of problem).

 

- At 1080p or 2K (they're slightly different things: DCI 2K is 2048x1080, and Samsung produces "2K" panels at 2048x1152 to conform to 16:9, since the DCI 2K ratio of 1.90:1 isn't quite 16:9), the 280X is probably quite competent, as are many other mid-range or upper-mid-range cards. Alternatively, take advantage of how inexpensive the 290/390 series are and enjoy higher IQ settings (e.g. you can make better use of SSAA). If you're up for spending more than $200-400 on a card, consider the newer Fury series, which is a good bit faster but runs more like $500-600.
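The resolution aside can be verified with a one-liner (the helper name is mine, just for illustration):

```python
# DCI 2K (2048x1080) is ~1.90:1, while 16:9 is ~1.778 - which is why the
# 2048x1152 "2K" panels stretch the height to hit 16:9 exactly.

def aspect(width, height):
    return round(width / height, 3)

print(aspect(2048, 1080))  # 1.896 -> DCI 2K, roughly 1.90:1
print(aspect(2048, 1152))  # 1.778 -> exactly 16:9
print(aspect(1920, 1080))  # 1.778 -> standard 1080p, also 16:9
```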


Yeah, well, at least I have 2400 MHz if I ever need it. I thought I would get an FPS increase from my computer using system RAM as video memory once my card fills its 1 GB, but no, it didn't increase my FPS, not that I could notice anyway. Didn't do much, actually.

 

- Yeah, Nvidia is behind with DX12, but everything I read said the 2xx series doesn't fully support DX12 - unless they tweaked the cards and sold newer ones? And Newegg keeps throwing an XFX 280X in my face for about $165. It's like they want me to buy it. It would be nice if I had money to toss out on a Fury X :)

 

- Yeah, like I said, I got all my parts cheap in the Black Friday sales. If it wasn't on sale, I'd have gone with a newer i7 if I had to shell out that kind of money. I agree, though, which is why I reset it all to defaults. I'd like to keep my CPU running strong so I can sell it and upgrade to another brand-new system.

 

- Yeah, it would be nice to get the Fury X. If the 290X goes on sale at a reasonable price, I'll get that for now. I really am baffled, though, at how my current card, the HD 5770, goes so strong even after 3-4 years of owning it. It was rebuilt by Gigabyte and is the OC version, with only 1 GB of VRAM, and I can run it at a 950 MHz GPU clock / 1375 MHz memory clock. Do you think it could be because of the high clock speeds? My guess was that it fills and clears the VRAM faster than average, frame by frame, which is what I think the stutters I occasionally get are from - I don't know much about that, so it's only speculation on my part. It's able to make use of its 1 GB even with texture mods and ENB, though I do suffer a HUGE FPS loss with ENB at 1080p, around 25-30 FPS depending, and around 15 with Tranquility ENB.


- All GCN supports DX12 in hardware - that's HD 7000 to present. No "tweaks" or anything necessary, but it will/did require a driver update (just like Mantle support did) - as far as I know it's actively supported at this point as long as you have Windows 10, given that you can run DX12 benchmarks on GCN.

 

- VRAM/video memory doesn't work quite that way. Games/applications don't access it directly or even "see" it - they access an API, which in turn accesses memory pages managed by the HAL and drivers, and resources are loaded and unloaded from memory (be it on-card or system memory) as needed. The on-card memory is a buffer, and different hardware and drivers will manage it in slightly different ways (so VRAM usage in the same game will vary between different cards). The 5770 keeping up with modern or modern-ish games isn't that surprising either - it supports DX11 and is moderately fast. Clock speed by itself isn't a direct indicator of performance either - there are other variables related to it that help you get an idea of performance, though. The performance hit you're seeing may be the card bumping its head on the 1 GB of VRAM, on memory bandwidth (how fast its on-card memory works), on GPU power, etc.

 

 

