
Anyone have one of the new 6 series cards yet?



FMOD, I'm sorry, I don't agree with that. Nvidia isn't simply skimping on their cards. I like the shorter PCB; it makes the card smaller for the same power as a larger one.

Actually they even added a plastic fairing and moved the fan there just to artificially make the card longer.

 

They are skimping - read TPU's review of the Asus GTX 670 DC2. Asus gave it a quality PCB design with a quality power supply, topped it with quality cooling, and got it to perform better than a stock 680 - which skimps on voltage regulation too.

 

 

I get lousy FPS in BF3, permanent screen tearing in Skyrim, and stuttering in just about everything I play.

This is weird. Dual 6970s are a popular BF3 setup (the 6950 is the same chip), and I haven't heard of anything like that. They usually provide well over 60 fps in BF3 at 2560x1600 with maxed-out settings.

 

But Radeon drivers are a bit finicky, so you could have run into such an issue because of a dirty OS, not uninstalling the old drivers, setting something wrong, or something else. Put fresh drivers and the game on a clean W7 install and things should change.

 

 

But if you look at the 670's OCing capabilities and compare it to the new AMD cards the 6 series as a whole wins hands down. The 670 and the 680 are both made with overclocking in mind.

What? Where did you get that idea?

 

A video card with cheap 4-phase power, a tiny aluminum heatsink, and no support for explicit clock rate setting is made with overclocking in mind?

 

 

Every half-decent review you'll find will see the 7950 and 7970 overclock far more in relative terms than their 670 and 680 counterparts, and, when both cards are overclocked, match or exceed their performance.

Both can handle about 1100 and 1200 MHz respectively, but 7950/7970 run stock at 825 or 900 MHz, while 670/680 have stock clocks of 980 or 1058 MHz, leaving less than half the headroom to go.
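If you run the numbers from those clocks, the gap is obvious. A quick back-of-the-envelope sketch (the attainable maximums vary from sample to sample, so treat these as ballpark figures from the review clocks above):

```python
# Stock clock and rough attainable overclock, in MHz, per the figures above.
cards = {
    "HD 7950": (825, 1100),
    "HD 7970": (900, 1200),
    "GTX 670": (980, 1100),
    "GTX 680": (1058, 1200),
}

def headroom_pct(stock, max_clock):
    # Relative overclocking headroom as a percentage of the stock clock.
    return (max_clock - stock) / stock * 100

for name, (stock, max_clock) in cards.items():
    print(f"{name}: {headroom_pct(stock, max_clock):.0f}% headroom")
```

That works out to roughly 33% for the 7950/7970 against roughly 12-13% for the 670/680, which is where the "less than half the headroom" figure comes from.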

 

 

You have yours for why you won't and that's fine, but this thread wasn't to argue AMD vs. Nvidia so please don't turn it into that.

I have two reasons for not buying 600 series cards - two Geforces.

So this isn't about AMD vs NV, this is about specific products. If you think I'm being a fanboy here (of a company I don't even have any products from), try and check your own claims, like the one about the 670 being built for overclocking.


I don't know enough, I guess. From what I have seen, heard, and read, the 670 overclocks very well. Certainly better than my 6950s do.

 

And here's the other thing. With my two 6950s I am on Catalyst 12.4 with 12.4 CAP 1. I recently completely wiped everything on the HDD due to a corrupt sector in my Windows 7 install. Then I bought a new drive entirely and started everything from scratch. I have been doing this for years, so I'm not really inexperienced with something like reinstalling drivers.

 

But enough of that. I guess what I don't understand is, if Nvidia did in fact skimp on their cards to the extreme that you say, why are they selling out everywhere while the AMD ones are not? I mean, if you check Newegg, NCIX.com, TigerDirect, etc., they are all sold out. Every 670 on Newegg is sold out and most of the 680s are too. I'm not saying sales amount to a quality product, but you would think that if they cut corners that badly, enough people would turn their noses up at them?


I don't know enough, I guess. From what I have seen, heard, and read, the 670 overclocks very well. Certainly better than my 6950s do.

If you can't overclock a 6950, chances are you aren't doing it right. What utilities were you using, and how specifically did you go about it? Did you try overclocking them with Crossfire disabled, at least?

 

 

But enough of that. I guess what I don't understand is, if Nvidia did in fact skimp on their cards to the extreme that you say, why are they selling out everywhere while the AMD ones are not?

The GK100-series GPUs, in particular GK100 and GK104, had a few low-level design mistakes that led to severe yield problems. In all semiconductor manufacturing, only some of the chips from each wafer work. Both Tahiti (the 7970 chip) and GK100 (the GTX 600 chip) were ready around mid-2011 and entered full-scale production at TSMC.
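To illustrate why yields matter so much for a big chip, here's a toy sketch using the first-order Poisson yield model (yield ≈ exp(-defect density × die area)). The numbers below are made up for illustration, not actual TSMC figures:

```python
import math

def good_dies(dies_per_wafer, die_area_cm2, defects_per_cm2):
    # First-order Poisson model: the chance a die has zero fatal defects
    # falls off exponentially with its area.
    die_yield = math.exp(-defects_per_cm2 * die_area_cm2)
    return int(dies_per_wafer * die_yield)

# Illustrative only: a big die vs a smaller die at the same defect density.
# The big die fits fewer candidates per wafer AND loses a larger share of them.
print(good_dies(200, 2.9, 0.5))  # big die
print(good_dies(300, 2.0, 0.5))  # smaller die
```

Since the foundry gets paid per wafer, every failed die still costs money, which is the squeeze described below.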

 

However, problems with manufacturing GK100 forced it to be replaced with a smaller GK104 design, which still has low yields, and delayed the release until March 2012 instead of pre-Christmas 2011 as initially planned. AMD also had insufficient yields to launch by Christmas, but nowhere near as low, so between August and December 2011 they stockpiled enough chips to launch the card in January 2012.

 

Nvidia is currently trying to rectify the problem, but until a redesign is complete and the production lines are equipped for the new chip, there simply aren't enough working chips to make a reasonable number of cards. So the GTX 680 became what is called a "paper launch", where the card is officially launched but not distributed to enough dealers. Such paper launches were very common in 2000-2005, until companies decided they were bad for business and switched to a "stockpile-then-launch" business model.

 

This information comes mostly from TSMC and Nvidia's own statements.


Well, that blows... To be clear, it sounds like you are saying that this initial release of GK104 is simply a crutch until Nvidia can work out the problems? So this way they make a little bit of cash to tide them over until the real thing comes out? What will it be then, the 7 series?

 

And I guess, if I was going to consider AMD again, which ones would be good to go for? The 7950 seems to be the equivalent of my 6950, only newer, with higher memory bandwidth, another gigabyte of VRAM, and higher clocks. Wish AMD cards had the dynamic vsync and TXAA... But maybe in time they will.

Edited by Dan3345

I think they'll release smaller redesigned chips in the 600 series and then properly redesigned large chips in the 700 series. But there's no telling when exactly that's going to happen.

GK104 actually costs NV a lot: they have to pay for the entire wafer (TSMC took a big loss taking payment per chip, as initially arranged, and broke off that deal), from which they only get a small number of good chips. This is, at least in part, the reason there are no price wars, and probably the reason behind the skimping on supporting components, since they still have to make a profit.

 

Currently, the reason behind GK104's high performance is the high clocks it runs at. These high clocks were made possible by almost entirely eliminating 64-bit computing units from it and leaving only 32-bit ones, destroying GPGPU performance. It's the same as with a 64-bit CPU: when you run 32-bit programs on it, the other 32 bits stay unused but still draw some power. Tahiti (7970) is a mostly 64-bit chip, so, while it can run just as high a clock rate (and perform as well or better there), it draws more power doing so.

 

Games tend to use simple 32-bit calculations, except for ones with PhysX (framerates on GK104 chips fall dramatically with PhysX enabled). Non-game uses, such as video decoding and encoding, OpenCL, CUDA, and GPU computing in general, are predominantly and increasingly 64-bit.
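If the single vs double precision distinction is unfamiliar, here's a quick demo you can run in plain Python (its own floats are fp64, and the struct module can round-trip through fp32):

```python
import struct

# fp32 vs fp64 storage: the datapath (and the power it burns) roughly doubles.
print(struct.calcsize('f'))  # 4 bytes: single precision, what games mostly use
print(struct.calcsize('d'))  # 8 bytes: double precision, what GPGPU leans on

# Squeezing a value through fp32 loses precision that fp64 keeps:
as_fp32 = struct.unpack('f', struct.pack('f', 0.1))[0]
print(as_fp32)  # no longer exactly the fp64 value of 0.1
print(0.1)
```

Games get away with the shorter format; simulation and compute workloads accumulate the rounding error, which is why they want the 64-bit units.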

 

GK110 is a chip that contains the full complement of 64-bit (better known as double-precision) units. It could be sold as a gaming GPU, but that resource allocation indicates primarily professional use, so, again, it might not be. I personally need GPU computing, so I have to get cards that can do it.

As for the next series of gaming cards, they'll probably go somewhere in between in terms of 64-bit performance. Just as the GTX 300 series was an updated 200, and the 500 an updated 400, it's likely that the GTX 700 will be this update, but there's no solid information on when and how.

 

 

 

 

And I guess, if I was going to consider AMD again, which ones would be good to go for? The 7950 seems to be the equivalent of my 6950, only newer, with higher memory bandwidth, another gigabyte of VRAM, and higher clocks. Wish AMD cards had the dynamic vsync and TXAA... But maybe in time they will.

I think you should rather try and fix the Crossfire issues you have now. 2x6950 should be very quick in BF3; there's clearly some problem. SLI isn't effortless either - it has to be turned off in a lot of games. But BF3 is among the best at working with multi-GPU setups. If you get it sorted out, there's probably no need to rush into replacing the GPUs yet. Neither the 7000 series nor the 600 series is better by a large enough margin to justify that.

 

As for TXAA, don't pay much attention to fancy AA modes. MLAA, FXAA, TXAA... they're not real MSAA or SSAA anti-aliasing, which works by rendering at a higher resolution, but rather shaders that selectively blur the sharper edges.

 

Real MSAA looks better in most cases where it can be used, and *XAA can be implemented in software. For instance, Morrowind with MGE-XE supports any kind of shader AA you can possibly think of; few people use it, though, since *XAA ruins the fine detail that game is full of. In the case of TXAA, it has to be implemented by the game, and it has to be a 600 series card, so I'm not sure if there's any game where it works yet. It's likely there will be an open alternative, but, really, it's not like one is needed that much; it doesn't seem to be anything special. *XAA modes are mostly workarounds for games with deferred rendering, where MSAA causes too much of a framerate hit, or which don't have enough detail for MSAA to lose.
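The difference is easy to show on a toy example. Below is a grossly simplified 1-D "image" with a hard edge: supersampling renders at 4x resolution and averages down (so the edge pixel gets its exact coverage value), while a *XAA-style pass just detects sharp transitions in the finished image and blurs them, smearing the edge across neighboring pixels:

```python
def supersample(scene, width=8, factor=4):
    # "Render" at 4x resolution, then average down: real (SS) anti-aliasing.
    hi = [scene(i / factor) for i in range(width * factor)]
    return [sum(hi[i * factor:(i + 1) * factor]) / factor for i in range(width)]

def post_blur(pixels, threshold=0.5):
    # *XAA-style post-process: find sharp edges in the finished image, blur them.
    out = pixels[:]
    for i in range(1, len(pixels) - 1):
        if abs(pixels[i + 1] - pixels[i - 1]) > threshold:
            out[i] = (pixels[i - 1] + pixels[i] + pixels[i + 1]) / 3
    return out

scene = lambda x: 1.0 if x >= 3.5 else 0.0  # hard edge between pixels 3 and 4
print(supersample(scene))                       # edge pixel lands at exactly 0.5
print(post_blur([scene(i) for i in range(8)]))  # edge smeared over two pixels
```

Same idea in 2-D: the blur pass can't recover detail the raw render never had, which is why it eats fine texture and geometry detail.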

 

 

Dynamic Vsync has been backported to older Geforces, and I've tried playing with it. There may be some use for this feature, but it's not a landslide, so I went back to triple-buffered Vsync.

Generally, regular Vsync with triple buffering (!) performs almost as well as dynamic Vsync, while delivering perfect zero-tearing results, unlike dynamic Vsync, which still displays intermittent tearing. In the official slides, they compare dynamic Vsync against normal Vsync without triple buffering, which severely underperforms. Still, in a number of cases, where you need a couple fps over triple-buffered Vsync but not the extra couple from no Vsync at all, I suppose dynamic Vsync can be a useful compromise.
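For anyone wondering why plain double-buffered Vsync underperforms so badly: the GPU has to wait for the next whole refresh, so frame time snaps up to a multiple of the refresh interval. A simplified sketch of that quantization on a 60 Hz display:

```python
REFRESH = 1000 / 60  # ~16.7 ms per scanout on a 60 Hz display

def effective_fps_double_buffered(frame_ms):
    # Without triple buffering, a finished frame waits for the next vblank,
    # so the effective frame time rounds UP to a whole number of refreshes.
    intervals = -(-frame_ms // REFRESH)  # ceiling division
    return 1000 / (intervals * REFRESH)

print(effective_fps_double_buffered(16.0))  # fits in one refresh: ~60 fps
print(effective_fps_double_buffered(18.0))  # just misses it: drops to ~30 fps
```

Triple buffering lets the GPU keep rendering into a third buffer instead of waiting, which is why it avoids that 60-to-30 cliff and is the fairer comparison point.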

Edited by FMod

Funny thing, I got to tinkering with it after reinstalling BF3 (I hadn't done so yet since the new HDD) and it works now, or seems to. A bit of stutter here and there, but Fraps shows I seem to average between about 78 and 125 fps.

 

The only issues I still have with xfire, which I can repeat no matter what install I am on, are the ones in Skyrim. For example, with xfire on and no custom profile in CCC to turn it off for Skyrim, water flickers when disturbed. That's a driver issue though - an annoying one, but a driver issue nonetheless.

 

Anyway, I'm not serious about upgrading right now; my computer works as it should for the most part. I am, however, looking for a new card for my dad's gaming PC, which currently uses a 1 GB GTX 460 and an 1100T. The GTX 460 isn't enough for him anymore and he wants something bigger. I guess he will have to wait a bit longer and see what comes out.


I've been disappointed with my AMD CPU. It's the weakest link, causing stutter in every game that stresses my PC, despite the fact that as an x4 @ 3.2 GHz it should be better than my dated graphics card. Meanwhile, my 460 with only 768 MB performs above expectations in every area, easily handling texture packs meant for 1 GB+ cards, and I get further boosts from every driver update. At this point I'm more inclined to trust Nvidia, after the recent driver catastrophe with AMD's anti-aliasing.

 

Not only that, but I've spoken with two people who have 7970s. One of them can't keep it from crashing due to power shortages on a 1000 W PSU, while the other reports a disappointing experience compared to his former two 580s in SLI.

Edited by Rennn

One of them can't keep it from crashing due to power shortages on a 1000w PSU

Sorry, but any video card that can overload a 1000W PSU is going to turn into a piece of charcoal very quickly.

 

I don't know where these stories got distorted, but they did. Even allowing for a very bad Chinese no-name PSU, any unit that claims 1000 W is at least going to produce 650, while a 2500K or 1100T and a 7970 only consume about 350 W together with all peripherals.

 

Regardless of one's preferences, let's stick to reality.

 

I'm not rushing to replace 2x580 with a single card either. Only the 690 gains anything over that, while still losing in some regards, and any single-chip card is far slower. Maybe GK110 can match 2x580.

Edited by FMod

One of them can't keep it from crashing due to power shortages on a 1000w PSU

Sorry, but any video card that can overload a 1000W PSU is going to turn into a piece of charcoal very quickly.

 

I don't know where these stories got distorted, but they did. Even allowing for a very bad Chinese no-name PSU, any unit that claims 1000 W is at least going to produce 650, while a 2500K or 1100T and a 7970 only consume about 350 W together with all peripherals.

 

Regardless of one's preferences, let's stick to reality.

I was just going to say he should post what brand his PSU is.


I have no idea what his PSU is; the PC kept shutting down with a 7970, and he told me it was because of power consumption with a 1000 W PSU. It's possible it's a terrible brand, like Diablotek or something. It's possible that he's terrible at diagnosing problems. *shrug* It's possible that the 7970 was pretty much broken, but that's hardly a persuasive argument for AMD.

Edited by Rennn
