phoneyLogic Posted March 27, 2016 (edited)

The problem with the i7-6700K is that it generally requires DDR4 (which is overpriced) and will be forced onto Windows 10 come 2017. Power consumption is similar to the 4790; if you're after efficiency and performance the Broadwell 5775C is the "winner", but honestly we're picking at a few watts, and when talking "high performance gaming computer" it seems kind of silly to fight about "power efficiency."

I was comparing the power consumption of the high-performance CPUs in the consumer lines of AMD and Intel. Of course there are CPUs which consume less energy than the high-performance ones. I myself am a fan of highly energy-efficient CPUs, which however are not meant for gaming, but for running 24/7. That said, a Core i7-4790 for socket FCLGA1150 with DDR3 RAM is of course an excellent choice for gaming as well.

Edited March 27, 2016 by tortured Tomato
obobski Posted March 27, 2016

Just to throw a wrench into the mix regarding Z170 (or any x170 board, for that matter): be aware that some have the USB 2.0 ports on an internal hub connected to the USB 3.0 controller.

Q: What does this mean?
A: If you are installing an OS that doesn't ship integrated drivers for the USB 3.0 controller (think Windows 7, and anything older), then none of the USB ports will work with a mouse or keyboard. There's a reason every one of those boards has a PS/2 port on it. That said, I had a customer who insisted on Windows 7 with a Z170 board and an i7-6700K. We had to procure a PS/2 keyboard and a SATA DVD drive so we could install the system and the USB 3.0 drivers. Once that was done we could continue and use a USB keyboard and mouse as normal. Not all x170 boards are like this, and frankly in my experience it was just that one board - though I can't be sure which brand it was, since most of my customers go for Windows 10 these days.

As previously mentioned, Microsoft is pushing, along with Intel, to require all the newer processors to run on the newer operating systems. From what I read it was unclear whether they would somehow force people to move to Windows 10, or whether they just wouldn't be back-porting support for newer upcoming CPUs to Windows 7 and 8/8.1. Microsoft have openly said they will force all Skylake (and later) machines to update to Windows 10 in 2017, and will not allow Intel to offer driver support for the next platform (Kaby something or other - Island? Lake? Sea? I don't remember) and its PCH for anything "less" than Windows 10. AMD is being dragged into this too, but I'm not sure with what platform, since their current CPUs are a number of years old (the upside here is that if you need more legacy support it may be easier to find on AM3), and they haven't released a "new" model in quite a while.

I was comparing the power consumption of the high-performance CPUs in the consumer lines of AMD and Intel. Of course there are CPUs which consume less energy than the high-performance ones. I myself am a fan of highly energy-efficient CPUs, which however are not meant for gaming, but for running 24/7. That said, a Core i7-4790 for socket FCLGA1150 with DDR3 RAM is of course an excellent choice for gaming as well.

Oh yeah, AMD vs Intel for power draw these days is totally mismatched - the top-of-the-line AMD CPUs are >200W TDP (liquid cooling required, per AMD specifications) and slower than most of Intel's nicer chips. As far as current Intel high-end offerings, the 5775C is still the best choice - no forced upgrade to Win10, works on LGA 1150 (at least with most boards - check compatibility before you buy), faster than the 4770/4790/6700 in many cases, and lower TDP. Example: https://techreport.com/review/28751/intel-core-i7-6700k-skylake-processor-reviewed/6 (just browse around; their power consumption measurements show the 4790 beating the 6700 too; honestly I've yet to see anything that makes a strong case for Skylake over the 1150 platform with one of the nicer 4600/4700/5700 chips).
phoneyLogic Posted March 28, 2016 (edited)

Oh yeah, AMD vs Intel for power draw these days is totally mismatched - the top-of-the-line AMD CPUs are >200W TDP (liquid cooling required, per AMD specifications) and slower than most of Intel's nicer chips. As far as current Intel high-end offerings, the 5775C is still the best choice - no forced upgrade to Win10, works on LGA 1150 (at least with most boards - check compatibility before you buy), faster than the 4770/4790/6700 in many cases, and lower TDP. Example: https://techreport.com/review/28751/intel-core-i7-6700k-skylake-processor-reviewed/6 (just browse around; their power consumption measurements show the 4790 beating the 6700 too; honestly I've yet to see anything that makes a strong case for Skylake over the 1150 platform with one of the nicer 4600/4700/5700 chips).

Frankly, that the i7-5775C outpaces an i7-6700K doesn't make much sense to me. There are differences: http://ark.intel.com/de/compare/88200,88195,88040

Maybe it comes down to gaming in conjunction with DDR4, and what the tested games are optimised for. However, in other tests the i7-6700K clearly outpaces the i7-5775C: http://www.technikaffe.de/cpu_vergleich-intel_core_i7_6700k-518-vs-intel_core_i7_5775c-528

Despite being energy-optimised, I myself would go for an i7-6700T (or a corresponding Xeon because of virtualisation features and ECC support - nothing that is needed for, or helps, game performance). They are still beasts.

Edited March 28, 2016 by tortured Tomato
obobski Posted March 28, 2016 (edited)

Frankly, that the i7-5775C outpaces an i7-6700K doesn't make much sense to me. There are differences: http://ark.intel.com/de/compare/88200,88195,88040 Maybe it comes down to gaming in conjunction with DDR4, and what the tested games are optimised for. However, in other tests the i7-6700K clearly outpaces the i7-5775C: http://www.technikaffe.de/cpu_vergleich-intel_core_i7_6700k-518-vs-intel_core_i7_5775c-528 Despite being energy-optimised, I myself would go for an i7-6700T (or a corresponding Xeon because of virtualisation features and ECC support - nothing that is needed for, or helps, game performance). They are still beasts.

Broadwell has slightly higher IPC than anything "near" it (e.g. Haswell, Skylake, Ivy Bridge) because of the 128MB L4 eDRAM cache, at least that's the current/mainstream explanation (it will be most telling when Broadwell-EP comes out, which won't have the L4 eDRAM, and we can see comparisons with that). Skylake was largely oversold as "the second coming" or similar simply because it is new, but it continues the multi-year trend of utter stagnation - in the TechReport review you'll also see Sandy Bridge and Ivy Bridge hardware happily keeping pace with Haswell, Broadwell, and Skylake (okay, sure, it may be 10% slower or so, but it's still running all the same games at playable frame rates - it isn't as if it's orders of magnitude slower). Anandtech has gone more in-depth on this: http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/10 (the whole review is worth reading)

DDR4 is largely a hype train too; it's "faster" but at the cost of generally much higher latency (in cycles), so overall throughput doesn't increase (but cost certainly does) - this is similarly an area of stagnation in "new" hardware design. A very similar phenomenon exists with "high speed" DDR3 in general as well, with reviews/benchmarks consistently showing at best maybe a few percent difference between DDR3-1333 and anything significantly "faster" (because, again, latency is always going up too).

The "free lunch" period of just running clock speed up, and getting new architectures on process shrinks that could offer higher IPC and more features (e.g. the move from Athlon -> Athlon XP -> Athlon 64, or Pentium III -> 4 -> Core 2), is largely a thing of the past, for better or for worse.

Edited March 28, 2016 by obobski
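To put a rough number on the latency point above, here is a minimal sketch (in Python) of first-word latency computed from data rate and CAS latency; the specific kits and CL figures are typical retail/JEDEC-style values assumed for illustration, not numbers taken from the posts or the linked reviews:

```python
# Minimal sketch: first-word latency (ns) from data rate (MT/s) and CAS latency.
# The kits and CL values below are assumed typical figures, purely for illustration.

def first_word_latency_ns(data_rate_mt_s, cas_cycles):
    io_clock_mhz = data_rate_mt_s / 2           # DDR transfers twice per I/O clock
    return cas_cycles / io_clock_mhz * 1000.0   # cycles * ns per cycle

kits = [
    ("DDR3-1333 CL9",  1333, 9),
    ("DDR3-1600 CL11", 1600, 11),
    ("DDR4-2133 CL15", 2133, 15),
    ("DDR4-2400 CL17", 2400, 17),
]

for name, rate, cl in kits:
    print(f"{name}: {first_word_latency_ns(rate, cl):.1f} ns")

# All four land around 13.5-14.2 ns: the CAS count rises with the clock,
# so absolute access latency barely moves even as bandwidth climbs.
```

Tighter-timed enthusiast kits do exist (which is where the "2400 smokes 1333 on latency" point later in this thread comes from), but under the assumed figures above the usual bandwidth-up, cycles-up trade-off leaves first-word latency roughly flat.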
phoneyLogic Posted April 8, 2016 (edited)

Broadwell has slightly higher IPC than anything "near" it (e.g. Haswell, Skylake, Ivy Bridge) because of the 128MB L4 eDRAM cache, at least that's the current/mainstream explanation … DDR4 is largely a hype train too; it's "faster" but at the cost of generally much higher latency, so overall throughput doesn't increase (but cost certainly does) …

That's indeed interesting with regard to game performance. However, if you compare other tests, an i7-4790K is remarkably faster than the i7-5775C: http://www.technikaffe.de/cpu_vergleich-intel_core_i7_4790k-411-vs-intel_core_i7_5775c-528

The most prominent feature of the 5775C is its powerful iGPU. In this regard, the i7-5775C is a very good choice if you don't want to use a dedicated GPU. It does not support DirectX 12, however.

The gaming comparison is interesting and somewhat surprising. Since the i7-5775C is still quite expensive, I wouldn't favour it over another chip in this category just because of a few fps - unless I wanted strong iGPU performance and weren't interested in DirectX 12. It is still a very good CPU after all: a very good balance between compute power, graphics and energy consumption. It would be very interesting for strong office PCs, and according to your linked tests it has very good gaming capabilities as well (the iGPU of course does not really compete with a dedicated card). So yes, it is a processor one can buy.

One could also buy an i5. Those are not that expensive and usually just lack Hyper-Threading, which is not so important for gaming anyway. If I remember correctly, there were tests pointing out that Hyper-Threading could even lead to slightly worse gaming performance. (I don't know how much is left of those findings today.) However, Hyper-Threading absolutely is a valuable feature in other regards and can give a huge advantage in rendering videos or whatnot. (I'm not telling you this - you probably already know it. It might be interesting for folks who are not so well informed about chip features and why they might or might not need them.)
Edited April 8, 2016 by tortured Tomato
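Since the rendering-vs-gaming point above (and the 30% / 80% scaling figures quoted later in this thread) really comes down to how much of a workload can run in parallel, here is a minimal Amdahl's-law sketch in Python; the parallel fractions chosen are illustrative assumptions, not measurements of any particular game or renderer:

```python
# Minimal sketch of Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the fraction of the work that parallelises and n is the thread count.
# The p values below are illustrative assumptions, not measured figures.

def amdahl_speedup(parallel_fraction, threads):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / threads)

workloads = {
    "lightly threaded game (p=0.30)": 0.30,
    "well-threaded game    (p=0.70)": 0.70,
    "video render          (p=0.95)": 0.95,
}

for name, p in workloads.items():
    s4 = amdahl_speedup(p, 4)
    s8 = amdahl_speedup(p, 8)
    print(f"{name}: 4 threads -> {s4:.2f}x, 8 threads -> {s8:.2f}x")

# A mostly serial game barely gains from extra threads (real or SMT),
# while a renderer with a large parallel fraction keeps scaling --
# which is why Hyper-Threading tends to pay off for video work far more than for games.
```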
obobski Posted April 9, 2016

Completely agree on the i5 4500s/4600s as other great candidates on 1150 - HT doesn't do much, if anything, for gaming, but with newer systems and OSes it shouldn't hurt either.
PillMonster Posted April 21, 2016 (edited)

Just to throw a wrench into the mix regarding Z170 (or any x170 board, for that matter): be aware that some have the USB 2.0 ports on an internal hub connected to the USB 3.0 controller. … Microsoft have openly said they will force all Skylake (and later) machines to update to Windows 10 in 2017, and will not allow Intel to offer driver support for the next platform and its PCH for anything "less" than Windows 10. … As far as current Intel high-end offerings, the 5775C is still the best choice - no forced upgrade to Win10, works on LGA 1150, faster than the 4770/4790/6700 in many cases, and lower TDP. …

I'm getting it all online, so don't worry about where I'm getting my stuff. Do not worry about the OS, speakers or monitor - I already have that sorted. So the fun part is making a gaming PC that is $2500 Aus or less, which is $1884 US, so it's fairly decent when you look at it from the US angle. I'm not looking for 4K, or to overclock the hell out of it, or to stuff the PC full of mods. I want it to last for the next 4-5 years playing recent games, so I need a PC that is suitable for my gaming needs. Does that help? And no, I can't reuse any of the hardware, because the current PC is becoming the family PC, so this new build must be made from scratch.

No, it really doesn't help - just because you have a rough idea of the exchange rate between AUD and USD doesn't mean retail pricing will correlate, or that availability will correlate. If you have a specific online vendor you're working with, it's easier if "we" know that, because then we can just go on there and play around with it, versus throwing out generic suggestions and going back and forth for 30 posts - "well, that brand isn't available, is this brand good?", "well, that part isn't available, how about this one?", etc. (this becomes especially true of power supplies, as not all OEMs/marques will sell to all markets due to regulatory hurdles and distribution agreements). The OS does matter too - if you want to stick with Windows 7 or 8.1 you have to make different hardware choices, because Microsoft is forcing upgrades to Windows 10 for newer Intel and AMD systems in 2017 (as in, you don't get a choice in the matter, you will have Windows 10 and you will like it); if you're looking to go with Windows 10 it also matters because of its various... features... like forced driver updates, which can wreak havoc with hardware/IHVs that are known for floating turkeys when it comes to drivers (e.g. nvidia). As far as the 4-5 years, let me just cut that off at the head: there is no future-proof, there is no guaranteed performance, the future is not set, there is no fate but what we make, yadda yadda yadda. You get to build something that works well today, and if XYZ whizbang new game comes out tomorrow and requires an all-new PC just to run, that's just part of it - if you absolutely cannot live with that reality, get a console. They're designed to work on 7-year lifecycles (so if you bought an Xbox One in 2013, it will still be playing new games in 2020, for example). Recent history has shown a relative stagnation in both performance and performance requirements, to the point that "brand new" really isn't anything to be up in arms about these days, but that's reading things backwards - just because Core 2s and GeForce 8800s have aged fantastically over the last ten years doesn't mean they'll be around for the next ten.

As I said earlier, this is absolutely not true. Skylake will run on any OS you like; this was never in doubt. MS at one stage said that W7/W8 systems with Skylake CPUs would not receive non-critical security updates via Windows Update. That is all. Not related to OS support in any way, shape or form. So you're making some very wild claims.

PS: FWIW, your breakdown of IPC performance is way off the mark. Cache size has nothing at all to do with CPU IPC. IPC revolves around cache latency, pipeline length and instruction fetch time, not cache size. It's almost sounding like you're making this stuff up as you go along...

...IPC doesn't even matter in DX11/12, it's all deferred rendering anyway. Cores win, and if we're talking dispatch, an AMD octa-core has higher IPC than an Intel 4C/4T because HT is logical. An Intel 4C/4T CPU scales at around 30% in single vs multi-threading. An 8-core AMD FX scales at closer to 80% multi-threaded; FX can simultaneously dispatch all threads at once, whereas Intel HT cannot.

@OP If you're still following this thread: I live in NZ and would be happy to help you build a machine; NZ and Aus dollars are practically the same anyway. You can PM me if you like. :smile:

Edited April 21, 2016 by PillMonster
obobski Posted April 21, 2016

PS: FWIW, your breakdown of IPC performance is way off the mark. Cache size has nothing at all to do with CPU IPC. … An Intel 4C/4T CPU scales at around 30% in single vs multi-threading. An 8-core AMD FX scales at closer to 80% multi-threaded; FX can simultaneously dispatch all threads at once, whereas Intel HT cannot.

Please don't drag other threads into places where they don't make sense - let that discussion happen where it started. To the rest:

Cache size can have a measurable impact on per-clock performance, and this has been well documented for decades (e.g. in reviews, in textbooks, even on Wikipedia). I'll let you go dig up evidence to support your counter-claims (I've provided evidence in support of my original claim, and stand by it).

AMD FX cannot do 8 fully independent issues at once (this is also a known fact; AMD doesn't even claim the contrary - the "16-way" CMT Opterons can do 8 complete issues at once, but their clock speeds are significantly lower than FX). The AMD FX "8 core" CPUs implement four CMT modules, each of which has a single shared fetch/decode front end, L1 instruction cache, FPU (a pair of symmetrical FMACs that can be "joined" for AVX), and L2 cache, plus a pair of integer cores (ALUs, AGUs and L1 data caches). The entire package shares its L3 cache and memory controller. From the scheduler's perspective it should be loaded similarly to SMT (e.g. Hyper-Threading), where each module is treated as a separate CPU (which it is) and the second set of resources on each module as a secondary logical CPU (this behaviour is native in Windows 8, and was patched into Windows 7 and Windows Server 2008 R2 - you can get it here: http://downloads.guru3d.com/AMD-Bulldozer-hotfix-from-Microsoft-download-2831.html).

There are no benchmarks that I'm aware of that show any AMD FX in current production even approaching contemporary (or even semi-contemporary) Intel processors for gaming performance, be it DirectX 9, 10, 11, or 12. Again, I'll let you dig up evidence to support your own claims (I've provided links to benchmarks that include quad-module CMT processors, such as the FX-8370, and DirectX 10 and 11 games - they are consistently at the bottom of the pack for performance, despite having "more cores" and higher clocks).

As far as handling all dispatches at once - Intel quad cores have always been "real quad cores" with four independent pipelines on one package; some of them implement Hyper-Threading, and its performance benefit varies with the workload (it usually does nothing for gaming, but can show improvements for multimedia applications in many settings - again, I've provided links to benchmarks that reflect this).

Can do without the personal attacks/jabs, too.
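As a quick way to see how an OS actually exposes this module/SMT sibling arrangement to the scheduler, here is a minimal sketch using the third-party psutil package; whether a given FX or Hyper-Threaded system reports the split described above depends on the OS (and, for FX on Windows 7, on the hotfix linked in the post), so treat the interpretation printed below as an assumption rather than a guarantee:

```python
# Minimal sketch: compare physical vs logical CPU counts as the OS reports them.
# Requires the third-party psutil package (pip install psutil).
import psutil

physical = psutil.cpu_count(logical=False)  # cores/modules the OS treats as "real"
logical = psutil.cpu_count(logical=True)    # includes SMT (HT) / CMT sibling threads

print(f"physical: {physical}, logical: {logical}")
if physical and logical > physical:
    # e.g. 4 physical + 8 logical on an SMT quad core, or on an FX box whose
    # scheduler groups each module's second integer core as a sibling thread.
    print("The OS exposes sibling threads the scheduler should fill last.")
else:
    print("No SMT/CMT siblings reported on this system.")
```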
PillMonster Posted April 22, 2016 (edited)

Intel quad cores have always been "real quad cores" with four independent pipelines on one package; some of them implement Hyper-Threading … Can do without the personal attacks/jabs, too.

Hmmm, only just saw your post, as I have notifications disabled. But yes, as you say, 4C/4T indicates 4 real cores. I made a typo - it was meant to be 4C/8T. I picked it up afterwards but didn't bother correcting it. 4C/4T already defines 4 cores, as opposed to 2C/4T, and I was talking about HT specifically, so I assumed you would realise it was a typo... but you know what happens when people assume. :wink: And if I came across as attacking you personally, I apologise. :)

As for the rest of your post... well, what can I say? You've copy-pasted info from the net, oblivious as to how it actually applies to the architecture, then tried to wing it by making up a story with a few buzzwords thrown in. Did you just assume I wouldn't notice? Dude, I work for Fujitsu Field Services (if that means anything). (BTW, the reason I knew about Skylake - Fujitsu is an MS partner.)

A few brief points; I'm not going over everything:

* "The scheduler should be loaded similarly to SMT"? What does that even mean? It's like saying "the DVD should be loaded similarly to baked potato".
* Up to W7 there was one scheduling algorithm for all CPUs: Intel's. Contrary to what you think, the BD hotfix never forced Intel scheduling on AMD. In fact it does the exact opposite. MS released the fix to ensure Intel HT scheduling would NOT be used on Piledriver. Intel HT is not even remotely related to AMD's MT implementation.
* HT is not modules. HT is logical cores using spare cycles.
* A 12-core Intel Xeon can dispatch 8 threads simultaneously. If you'd like to pursue a debate on that point I'll be happy to provide the Intel whitepaper which is sitting on my desktop.
* SMT is not the same as SMD; SMD is AMD.
* All AMD FX processors can execute up to 8 threads simultaneously. The FPU doesn't correlate to SMD execution; if I'm wrong, please explain how.
* HT and SMT are different things. HT is a subset of SMT.
* SMT isn't SMD either. Threading is not execution. Dispatch is execution.
* CMT is a marketing term made up by AMD to describe how threading works on Bulldozer. It's not really an architecture.
* Pipelines: not 4 pipelines, there is only one pipeline for everything.

"Again I'll let you dig up evidence to support your own claims (I've provided links to benchmarks that include quad-module CMT processors, such as the FX-8370, and DirectX 10 and 11 games - they are consistently at the bottom of the pack for performance, despite having "more cores" and higher clocks)."

lol, your one link to a bugged bench test doesn't mean jack. All it shows is a flawed benchmark with a flawed game engine that uses DX11 single-threaded on 2 cores. DX11 has been out for 3 years... nearly every AAA title since BF3 has full DX11 multithreading support. Like tortured Tomato said, an i7 will beat an i5 in any properly threaded DX11 game; so will Vishera. Otherwise it's not multi-threaded. Want benchmarks? There are dozens all over the net. Anyone who hasn't been living in a cave has already seen them. Every next-gen console port runs on DX11.

* How are DX9 or DX10 relevant to this discussion? I don't recall mentioning either one; the topic is DX11.
* Skylake supports both DDR3 & DDR4, not "mostly DDR4".
* Faster RAM has LOWER latency, not higher. 2400MHz DDR3 will smoke 1333MHz where latency is concerned.
* "ATI chipsets were good up til the 7/8 series" Really? Strange, because ATI only designed GPUs; they have never been involved with chipsets.

After going back over your posts the lack of technical knowledge is clearly evident. I'm 41 and lead a busy life. Spamming benchmarks just to "prove" facts for your benefit is a waste of energy. I could - however, I've seen these 100 times already; my PB account is full of them. Not interested. If and when you'd like to stop playing charades and have a serious discussion, PM me. I will put aside some time to address all points, providing plenty of supporting documentation: industry technical whitepapers from Intel and AMD, chipset makers and vendors, etc. Wikipedia isn't a 100% reliable source; I usually avoid it if possible.

Seeya :wink:

Edited April 23, 2016 by PillMonster