Nvidia Pascal GPU has 17 billion transistors
#1
Quote:Exclusive: Pascal has killer performance
Pascal will have as many as 17 billion transistors under the bonnet, Fudzilla can exclusively reveal.
Pascal is the successor to the Maxwell Titan X GM200, and we have been tipped off by some reliable sources that it will have more than double the number of transistors. The huge increase comes from Pascal's 16nm FinFET process, whose transistors are close to half the size.
Nvidia and AMD are making their GPUs at TSMC, and the Taiwanese foundry has announced 16nm FinFET production runs. Intel and Samsung/GlobalFoundries call their process 14nm. Our sources told us the branding depends on which side of the transistor you measure, the longer or the shorter; the gate size is almost identical for the 16nm and 14nm processes.

Quote:Pascal has 17 billion transistors and it will be significantly smaller silicon than the 28nm Maxwell-based GM200.
Nvidia will use second-generation HBM for its Pascal GPU to reach 32GB on the highest-end card. This is 2.7 times more than the already impressive 12GB used on Titan X. The second-generation HBM, or HBM 2.0, will enable 8Gb per DRAM die, 2Gbps speed per pin and 256 GB per second of bandwidth per stack.
The first generation offers 2Gb of density per DRAM die, 1Gbps speed per pin, 128 GB per second of bandwidth per stack and a maximum of 4-Hi stacks with 4GB per card. You saw this with the Fiji cards.
HBM2 enables cards with four 4GB stacks, or four 8GB stacks, resulting in 16GB and 32GB respectively. Pascal has the power to do both, depending on the SKU.
The GPU looks great but it is coming in 2016, and not before. After Pascal comes the Volta GPU, but that will take a few years.

http://www.fudzilla.com/news/graphics/38...ransistors
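The per-stack figures quoted above can be sanity-checked with a little arithmetic: each HBM stack has a 1024-bit interface (that width is the published HBM figure, not from the article), so stack bandwidth is pin speed times 1024 bits over 8, and card capacity is die density times dies per stack times stacks. The 8-Hi stack height for HBM2 is an assumption implied by the article's 32GB total. A quick sketch:

```python
def stack_bandwidth_gbs(pin_speed_gbps, bus_width_bits=1024):
    """Bandwidth of one HBM stack in GB/s: pins * Gb/s per pin / 8 bits per byte."""
    return pin_speed_gbps * bus_width_bits / 8

def card_capacity_gb(die_density_gb, dies_per_stack, stacks):
    """Total VRAM in GB: (Gb per die * dies per stack / 8 bits per byte) * stacks."""
    return die_density_gb * dies_per_stack / 8 * stacks

# HBM1 (Fiji): 2Gb dies, 1Gbps per pin, 4-Hi stacks, 4 stacks per card
assert stack_bandwidth_gbs(1.0) == 128      # 128 GB/s per stack
assert card_capacity_gb(2, 4, 4) == 4       # 4GB total, as on Fury

# HBM2 (Pascal, per the article): 8Gb dies, 2Gbps per pin, 8-Hi, 4 stacks
assert stack_bandwidth_gbs(2.0) == 256      # 256 GB/s per stack
assert card_capacity_gb(8, 8, 4) == 32      # 32GB total
```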
Ok with science that the big bang theory requires that fundamental scientific laws do not exist for the first few minutes, but not ok for the creator to defy these laws...  Rolleyes
#2
I JUST HAVE TO BUY ONE.
#3
ME TOO
#4
32GB would be overkill for the next 3-4 years???

16GB is probably overkill for 1-2 years, so I'd be happy with that.

This is insane. 8x the memory capacity of HBM1 (8 times the amount per stack)!!!

Then the memory speed is doubled as well. The total effective bandwidth would then break the 1 TB/s milestone! Basically 3x Titan X's bandwidth.

This is absolutely phenomenal. 4K gaming will be a breeze with a single card, with anything you could throw at it.
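That 1 TB/s milestone follows directly from the per-stack numbers: four HBM2 stacks at 256 GB/s each. Against Titan X's 336.5 GB/s (the published 384-bit, 7Gbps GDDR5 figure), the ratio works out to roughly 3x:

```python
hbm2_total = 4 * 256        # four HBM2 stacks at 256 GB/s each
titan_x = 336.5             # Titan X: 384-bit bus at 7Gbps GDDR5, in GB/s

assert hbm2_total == 1024   # breaks the 1 TB/s (1024 GB/s) milestone
ratio = hbm2_total / titan_x
assert 3.0 < ratio < 3.1    # basically 3x Titan X's bandwidth
```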
#5
I JUST HAVE TO BUY ONE.
#6
We will see how it turns out. I wouldn't get too excited over a fudzilla article penned by F<r>aud though.
Adam knew he should have bought a PC but Eve fell for the marketing hype.

Homeopathy is what happened when snake oil salesmen discovered that water is cheaper than snake oil.

The reason they call it the American Dream is because you have to be asleep to believe it. -- George Carlin
#7
I want to get closer to release before I get excited.
Valve hater, Nintendo hater, Microsoft defender, AMD hater, Google Fiber hater, 4K lover, net neutrality lover.
#8
It wasn't just Fudzilla saying this about HBM2 memory in the past. All the other rumors pointed to double the bandwidth and quadruple the capacity, but Fudzilla is the first to say it could even be 8x the capacity using the same number of chips.

Remember when all the rumors about HBM1 memory said it would be 1250MHz? Then it turned out to be only 1000MHz, and a poor overclocker at that. One rumor claimed it would be like 5000MHz-effective GDDR5 with a 1024-bit bus. That was a lousy rumor, but it wasn't too far off (1000MHz, with a total of 4096 bits of bus width).

Memory bandwidth has really NOT been keeping up with GPU processing power (TFLOPs, etc). Of course, with compression and other optimizations it wasn't too bad, but still, just think about how much faster the GPU would be if it were designed with super bandwidth in mind, where nearly everything the GPU needed could be loaded in one shot.
#9
It would be nice, but expensive, to have super-fast video memory.

We need fast system RAM far more urgently. It would make a huge difference to PCs.
#10
But quad-channel memory isn't making much of a difference even comparing Haswell-E with Haswell, which has half the bandwidth. At least we're moving to DDR4, and it should ramp up quickly within the next 1-2 years as DDR4 matures. It's still a bit disappointing, though, as some rumors a couple of years ago said it would be quad-pumped just like GDDR5 (which really should be called QDR rather than DDR). Perhaps that will still come to system RAM within the next 2 years, if desktop DDR4 turns out to be as short-lived as GDDR4 was (the HD 3870, and maybe one other video card, ever used GDDR4).
#11
It isn't quad channel system memory that is needed.

Most system RAM still only has a base clock of 100/200/400 (133/266/533) MHz, with the various DDR permutations slapped on top of that. It is orders of magnitude slower than the CPUs accessing it.
#12
Hmm, that is interesting about the base clock - it seems we've been stuck at a 100MHz BCLK forever. This must be why the new eDRAM "L4" Crystal Well cache is helping Intel's 14nm CPUs so much (like the massive L2 cache helping Maxwell overcome most of its relative bandwidth limitation).

Next year, perhaps we'll be seeing cheap DDR4 memory clocked at 3200MHz "effective" or more.

Then, as Intel and AMD stubbornly stick to such low base clocks, one of these companies might go ahead and integrate HBM memory with the CPU, while still allowing for add-on DDR4 expansion.

This is what AMD should have done with their Fury cards, which were limited to only 4GB of HBM1 memory. Just add old-fashioned GDDR5 chips to make it, say, 8-12GB or even 16GB, to take a bit of the thunder away from Titan X. It would have been more complicated, but with enough bright engineers it should have been as doable as the L3 and "L4" caches on Intel's CPUs. The extra GDDR5 memory would certainly be slower, but Nvidia did that before with the GTX 970, and even the GTX 550 Ti, where a portion of the memory had lower bandwidth than the rest.
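A toy model of that split-pool idea: with a fast and a slow memory segment, average bandwidth is the harmonic mean weighted by how often accesses land in each segment, which is why a small spill into the slow pool costs little. All numbers below are purely illustrative, not any actual card's specs:

```python
def effective_bandwidth(fast_bw, slow_bw, slow_fraction):
    """Average bandwidth (GB/s) when slow_fraction of accesses hit the slow segment.

    Time per byte is the weighted sum of each segment's time per byte,
    so the average is a weighted harmonic mean of the two bandwidths.
    """
    return 1 / ((1 - slow_fraction) / fast_bw + slow_fraction / slow_bw)

# Illustrative: a 512 GB/s HBM segment plus a 112 GB/s GDDR5 segment.
assert effective_bandwidth(512, 112, 0.0) == 512   # never touch the slow pool

# If only 5% of accesses spill into the slow pool, the average barely drops:
bw = effective_bandwidth(512, 112, 0.05)
assert 430 < bw < 440
```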

Ideally, every high-end CPU should come with 4-16GB of HBM2 memory while still allowing for quad-data-rate memory capacity of up to 128GB or so. It's like having a GPU integrated with the CPU, making it easier for users to troubleshoot their PCs if their dGPUs go bad - same for memory sticks (if a stick is suspected to be bad, the PC would still work without it).

Imagine an i5-4690K with 4GB of HBM2, an i7-4790K with perhaps 8GB, and the "E" variants with up to 16GB. Intel would be more justified in getting gamers to cough up $400+ for a CPU that has more than just a new-generation integrated GPU that most of them won't ever really use.
#13
There was a recent interview with Richard FUDdy where he addressed the 4GB concerns over Fury. He seems to think HBM is so fast that in 4K gaming situations AMD can simply swap portions of memory in from system RAM without issue. That seems like a preposterous notion to me, given the data has to come out of system RAM, be routed through the processor and PCIe, then loaded into GPU memory.


https://www.youtube.com/watch?v=NQ8YlXh-...ture=share
#14
(07-27-2015, 11:49 AM)gstanford Wrote: It isn't quad channel system memory that is needed.

Most system ram still only has a base clock of 100/200/400 (133/266/533) mhz, then the various DDR permutations slapped on top of that.  It is orders of magnitude slower than the cpu's accessing the ram.

Huh?

You say the PC needs more RAM and that's what is holding it back, but then this?

The memory speed in MHz matters little at the end of the day. It's how they use that MHz: the capabilities, the speed of the data it can store and recall.

The individual chips don't have to be running at 3000MHz. Errors are kept to a minimum; data corruption is a real concern.
The doubling of RAM capability through DDR allows RAM to double its throughput without errors and corruption. It doesn't matter that the chips run at a lower MHz; the bandwidth is there, the data flow is there, and the speed has increased dramatically since the beginning.

Doubling the channels, from dual channel to quad, also effectively doubles the bandwidth. The results are real and measurable; you can see them clearly in any memory-sensitive task or benchmark. Memory is something that can easily be divided up, no problem. When we doubled up CPU cores, there were issues with threading. Memory doesn't have those issues; it can be divided and then recalled with no problem. I have no idea why you think the RAM chips need to run at a higher MHz; the overall throughput wouldn't be dramatically different. The fact that you can move from quad channel to dual with very little impact tells me there is very little to be gained by running the individual RAM chips faster.

Dual/triple/quad channel and DDR won't be as good as RAM chips running so fast that they achieve the same theoretical transfer rate; that I grant. But the difference between quad channel and dual has very little real-world benefit, and the overhead for expanding channels and DDR is not much. It just doesn't add up to me that there would be this huge, massive difference if we had faster RAM chips. Even with the massively increased throughput from quad channel, which is real and measurable, there is very little to gain in the real world.

Gaming is an area that should show direct results from faster RAM chips. But you can only gain so much, because you always run up against bottlenecks elsewhere. Even after such a drastic change, the gains in the end can only be a few percent. Speeding up your CPU or getting a faster GPU will drastically improve performance; faster RAM chips? I don't see how they can drastically change anything at this point. Almost all apps work well the way things are today; very few tasks are data-starved and waiting for RAM.
#15
Quote:You say PC needs more ram and that is what is holding it back but then this?
I never said that we need more ram, I said that system ram needs to be faster than it is.
#16
RAM and how a CPU accesses it can certainly make a drastic difference to a computer system.

Do you know why a 6502 running at 1MHz could equal (or better) a Z80 running at 4MHz back in the 8-bit days?
#17
Of course.

But what in the heck would you need the RAM chips running at a higher speed for? DDR3 allows for 2,133MT/s. If you removed the DDR schemes, took regular old SDRAM and superclocked it to 2,133MT/s, you would end up with similar data-transfer performance at much higher power consumption, on top of introducing stability issues as the errors multiply.
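That equivalence is just multiplication: the effective transfer rate is the base array clock times the DDR prefetch multiplier, so a ~266MHz DDR3 array and a hypothetical single-data-rate SDRAM superclocked to 2133MHz land on the same MT/s. A sketch (the per-channel bandwidth line assumes a standard 64-bit channel):

```python
def transfer_rate_mts(base_clock_mhz, multiplier):
    """Effective transfers per second: base array clock times the prefetch multiplier."""
    return base_clock_mhz * multiplier

# DDR3-2133: ~266MHz memory array, 8n prefetch (4x I/O clock, double data rate)
ddr3 = transfer_rate_mts(266.67, 8)

# Hypothetical single-data-rate SDRAM would need the whole array at 2133MHz
sdr = transfer_rate_mts(2133.33, 1)

assert round(ddr3) == round(sdr) == 2133   # same MT/s either way

per_channel_gbs = ddr3 * 8 / 1000          # 64-bit channel: ~17 GB/s
```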
#18
Having faster RAM clock rates also eliminates the latency the various DDR schemes introduce.

A large part of why the PS3 has lasted as long as it has is that the Rambus XDR DRAM in it runs at 3.2GHz effective - the same speed as the Cell CPU. Reliable high-speed memory can be designed and manufactured; shame that no one other than Rambus has done so thus far.
#19
Pascal has reportedly moved to the testing phase: http://www.techpowerup.com/216203/nvidia...phase.html
#20
17 billion transistors... ultimate drool!
#21
http://vrworld.com/2015/11/16/nvidia-unv...bandwidth/
Pascal will launch with 16 GB of VRAM, packing 1 TB/second bandwidth.
#22
More new info on Pascal:
http://wccftech.com/nvidia-pascal-volta-gpus-sc15/

100mm^2, 200W, 20 TFLOPs!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

That is - we're just not used to a full node shrink anymore.

Plus, with 4x the memory of a GTX 980 or R9 Fury, this is the generation to buy!!!!  All of the next-gen console ports will have no problem running at 4K for like 2 years, with fully or close to fully maxed-out settings!

I'm boycotting this Maxwell generation.  And Fury as well!  Sold off Hawaii after it was no good for bitcoin mining.
#23
(11-25-2015, 06:33 AM)BoFox Wrote: More new info on Pascal:
http://wccftech.com/nvidia-pascal-volta-gpus-sc15/

100mm^2, 200W, 20 TFLOPs!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

That is - we're not used to a full node shrink.

Plus, with 4x the memory of a GTX 980 or R9 Fury, this is the generation to buy!!!!  All of the next-gen console ports will have no problem running at 4K for like 2 years, with fully or close to fully maxed-out settings!

I'm boycotting this Maxwell generation.  And Fury as well!  Sold off Hawaii after it was no good for bitcoin mining.

You NVIDIOT shill, these specs are a big "Meh!". Rolleyes

NVIDIA will be going out of business, because AMD has all three consoles, and intel has teh laptopz!

111!!!111111!!!!!
#24
Quote:100mm^2, 200W, 20 TFLOPs!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Heh, I hope you understand that isn't what they are saying at all Smile

That was to illustrate that simply using FLOPS can be misleading: they could make a chip that had nothing but FP units on it, tiny and consuming, relatively speaking, a minuscule amount of power, if all they were looking for was a raw FLOPS number.
#25
So basically they are scared that AMD will be selling counterparts with a higher FLOPS rating.
#26
(11-25-2015, 10:11 PM)ocre Wrote: So basically they are scared that AMD will be selling counter parts with a higher flop rating.

They are scared of AMDs ninjas, NVIDIA has no 40 foot tall pirate skeleton to protect them.

I think we've reached the point it probably doesn't matter what AMD does, the brand has lost favor due to various factors.
#27
NOOOO!!! Nvidia is gonna do it, trust me!!! A chip smaller than the 55nm HD 3870, with 200W forced through that tiny mofo, gives us 3x the TFLOPS output of a Titan X - while matched with 1.2TB/s of HBM bandwidth! Tongue

Then Volta would suck with just a tiny bit more DP FLOPS output, still not matched with 1.5x-efficient HBM2 memory.... we'd just see smaller and smaller gains from Nvidia after that, just like with Intel, where the first-gen Core i7 is still wicked fast for today's games (beating anything AMD has today - even their 5GHz water-cooled Piledriver).

I'm just hoping Pascal isn't the last massive GPU leap we'll see for a while, with a 5% gain in GPU "IPC" per generation after that despite subsequent node shrinks a-la Intel.......... don't scare me, Nvidia!!! AMD would need to go out of business for you to get lazy like Intel!
#28
See what I mean:

[Image: NVIDIA-We-are-here.jpg]

Meaning - with Pascal, we're seeing 7-14x the GFLOPs/W over what Nvidia had in 2013 (Kepler, although it was still an extremely dumbed-down architecture, DP-wise).  But then fast forward 6 years later to 2023, and we're seeing only maybe 2-3x increase. 

[Image: NVIDIA-Pascal-GPU_Roadmap.jpg]

That is, about a 70-75% increase for Volta (per watt), along with 7 DP TFLOPs compared to Pascal's 4. SGEMM is Single-precision General Matrix Multiplication, so it's more meaningful for games, at least when it comes to power efficiency (without assuming that the increase in DP FLOPs output also means the chip is getting upwards of a 70% increase in SP FLOPs output, or even overall efficiency). Still, Volta will sport only 1.2TB/s of HBM2 bandwidth compared to Pascal's 1TB/s.

Nonetheless, with the first image showing DP performance per watt, whatever comes out in 2023 would probably be as low as 1.5x as efficient as Volta in 2018, assuming that Volta is receiving a similar gain in DP efficiency as with SP efficiency over Pascal. 

This is scary, and I'm sure NV would have to work extra hard at ensuring that there is a new memory architecture to replace the projected 120W consumption needed for HBM2 memory with 2TB/s bandwidth (or 160W 1.5x efficient HBM2 with 4TB/s bandwidth - not expected in time for Volta after Pascal).
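For what it's worth, the generational step cited above can be checked against the chart's own numbers: Volta's 7 DP TFLOPs over Pascal's 4 is a 75% jump, which is the "about 70%" figure rounded down. A trivial check:

```python
volta_dp_tflops = 7
pascal_dp_tflops = 4

# Fractional generational gain in DP throughput, per the roadmap slide
gain = volta_dp_tflops / pascal_dp_tflops - 1
assert round(gain * 100) == 75   # the "about 70%" step is really 75%
```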
#29
(11-26-2015, 05:44 AM)BoFox Wrote: NOOOO!!!  Nvidia is gonna do it, trust me!!!  A chip smaller than HD 3870 on 55nm, with 200W forced through that tiny mofo, gives us 3x the TFLOPS output of a TitanX - while matched with 1.2TBPS HBM bandwidth!  Tongue

Then Volta would suck with just a tiny bit more DP FLOPS output, and still not yet be matched with 1.5x efficient HBM2 memory....  we'd just be seeing smaller and smaller gains from Nvidia after that, just like with Intel after the first-gen Core i7 still being wicked fast for today's games (beating anything that AMD has today - even their 5GHz water-cooled Piledriver).  

I'm just hoping that Pascal isn't the last massive GPU hurdle that we'd be seeing for a while, with 5% gain in GPU "IPC" per generation after that, despite subsequent node shrinks a-la Intel..........   don't scare me Nvidia!!!  AMD would need to go out of business for you to get lazy like Intel!

NVIDIA can't get lazy like intel, even if they wanted to.

Every PC on the globe needs a CPU, and no one wants to burn their hands or cause the lights to dim when they turn on their laptop or desktop. (respectively) So intel can basically start re-badging 5 year old 2600Ks and we'd all buy them- not like we have any choice.

Gaming GPUs are a whole different story.

To keep the doors open, NVIDIA has to sell you a new GPU every couple of years that offers some decent improvement, or they go bankrupt. If they put out a generation with a 5% improvement, we all keep the last gen. If they try to jack prices, we all just keep the last or prior gen - it's not like we HAVE to game at 4K. (or even 1600p)
#30
You have a valid point, Rollo; however, you're not pointing out that progress would be much faster with proper competition from AMD. We are lucky they have been duking it out for so long. If AMD ever goes down, though, which it looks like they will, I fully expect nVidia to slow their development just because they can afford to.
#31
Quote:We are lucky that they have been duking it out for so long.

Are we? If you look at the most dominant parts of the past 15 years - GeForce DDR, Radeon 9700 Pro, GeForce 8800GT - they had *no* competition, were priced in a bracket we'd consider mid-tier now, and offered extreme longevity to gamers. And inflation over time doesn't really apply; an SLI V2 setup was almost $700 back in the day, years prior to the GeForce.

Quote:If AMD ever goes down, though, which it looks like they will, I fully expect nVidia to slow down their development just because they can afford to.

That is working under the assumption that AMD really impacts nVidia at all, which frankly I find extremely unlikely from an R&D perspective. Right now 80% of the PC gaming market is nVidia; if they clean AMD's clock, the biggest bump they can hope for is a 25% increase over what they have now. Keeping their own customers upgrading is how they make their money. nVidia's biggest competitor at this point is the parts they have already shipped. That is the reality of the situation.

AMD going down wouldn't change much for nVidia at this point. If they were splitting the market 50/50, or hell, even 65/35, they would be a factor; as it stands, nVidia could qualify as a monopoly if the FTC considered the tiny sub-segment of a market that is add-in GPUs a valid one for regulation.
#32
(11-26-2015, 07:18 AM)SickBeast Wrote: You have a valid point Rollo however you're not pointing out the fact that the progress would be much faster with proper competition from AMD.  We are lucky that they have been duking it out for so long.  If AMD ever goes down, though, which it looks like they will, I fully expect nVidia to slow down their development just because they can afford to.

I'll add this to what Ben said:

I don't think NVIDIA can afford to slow down development for the reasons stated. They exist to make profit for the stockholders and if they start dribbling out parts not as many people are willing to upgrade to, or stretch the time between upgrades, stock holders will call for changes, people will lose jobs.

What you're saying "could" be true if NVIDIA were in the position of having a somewhat cut-down part like the 980 and a full part like the 980Ti/Titan X done at the same time. They could then stagger releases to add development time while maintaining sales volume and profits (and this would not have been possible if AMD had released the Fury X before the 980). So in that sense, releases could be slowed a bit. (But a 980 is almost the same performance as two GTX 680s, and over 20% faster than a 780, so a nice upgrade over those cards.)

The other thing about it is I've never bought the "competition drives innovation" in this industry. These parts are basically inventions that take a long time to bring to market. You can't tell inventors,"Invent faster! The competition has beat us!" That's why AMD owned the CPU performance for a couple years while intel scrambled. (even though intel had more money and staff by far)

Theoretically someone from Power VR could invent something that we all buy for the next three years, maybe forever.
#33
^^^ Ben and Rollo, the same could be said for Intel, yet Intel is still showing such strong sales even with just a ~5% performance gain PER YEAR ever since Sandy Bridge.

Of course I don't expect NV to diminish their gains to just 5% per year, but heck, it could be half of what we're used to seeing now (or less). Volta might still come out ahead of Pascal by just as much as Maxwell did over Kepler, but then there might be yet another generation on 14/16nm. Look at Intel abandoning the tick-tock cadence with Kaby Lake, which looks so unimpressive thus far - likely a much smaller step than Skylake was over Haswell. But wait a minute, I thought Intel really wanted to sell as many of their CPUs as possible?

I'm not fully buying Nvidia's bleak outlook - it could just be tied to the HBM architecture, with 1.5x efficiency being the most that could be gained within a couple of years or so. Certainly NV is working hard on a new memory architecture that could change everything 5 years from now. If nothing completely revolutionary, it should at least involve gobs and gobs of L4 eDRAM cache, like what Intel has been doing with Iris Pro graphics.....
#34
(11-27-2015, 01:41 PM)BoFox Wrote: ^^^  Ben and Rollo, the same could be said for Intel but Intel is still showing such strong sales even with just ~5% performance gain PER YEAR ever since Sandy Bridge.  

Of course I don't expect NV to diminish their gains to just 5% per year, but heck, it could be half of what we're used to seeing now (or less).    But wait a minute, I thought Intel really wanted to sell as many of their CPUs as possible?  

What I said above applies here.

EVERY PC needs a CPU, and they're all intel. A small percentage of PCs need NV gaming GPUs, and if they don't bump performance we'll just keep the old ones.
#35
Pascal cards spotted: http://wccftech.com/four-nvidia-pascal-g...s-spotted/
#36
I JUST HAVE TO BUY ONE.
#37
2 months from now?  Is that a good estimate?  Didn't know HBM2 memory would be ready already...
#38
(02-16-2016, 02:22 PM)BoFox Wrote: 2 months from now?  Is that a good estimate?  Didn't know HBM2 memory would be ready already...

JUST BUY ONE. YOU JUST HAVE TO BUY ONE.
#39
(02-17-2016, 04:27 AM)SickBeast Wrote:
(02-16-2016, 02:22 PM)BoFox Wrote: 2 months from now?  Is that a good estimate?  Didn't know HBM2 memory would be ready already...

JUST BUY ONE.  YOU JUST HAVE TO BUY ONE.

Eh, maybe. No one here is computer gaming these days, doubt I'll jump.
#40
(02-17-2016, 06:52 AM)RolloTheGreat Wrote:
(02-17-2016, 04:27 AM)SickBeast Wrote:
(02-16-2016, 02:22 PM)BoFox Wrote: 2 months from now?  Is that a good estimate?  Didn't know HBM2 memory would be ready already...

JUST BUY ONE.  YOU JUST HAVE TO BUY ONE.

Eh, maybe. No one here is computer gaming these days, doubt I'll jump.

YOU JUST HAVE TO JOIN THE FOCUS GROUP.