Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
|
Welcome to the Generalized Fermat Prime Search
This project is LIVE! Please see the second post for Beta information that was important during testing.
This search is for primes of the form b^(2^n)+1. The numbers F(b,n) = b^(2^n)+1 (with n and b integers, b greater than one) are called generalized Fermat numbers. In the special case where b=2, they are called Fermat numbers, named after Pierre de Fermat, who first studied them.
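For concreteness, a generalized Fermat number and a quick Fermat probable-prime check can be sketched in a few lines of Python. This is a toy illustration only; the actual search uses Genefer, which performs a far more optimized test of this kind on multi-million-digit candidates:

```python
def gfn(b, n):
    """Generalized Fermat number F(b, n) = b^(2^n) + 1."""
    return b ** (2 ** n) + 1

def fermat_prp(N, a=3):
    """Fermat probable-prime test to base a: a pass is strong evidence of
    primality, not a proof."""
    return pow(a, N - 1, N) == 1

# b = 2, n = 4 gives F_4 = 65537, the largest known Fermat prime
assert gfn(2, 4) == 65537 and fermat_prp(65537)
```

Note that only even b are interesting: for odd b, b^(2^n)+1 is even and therefore composite.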
The original Generalized Fermat Prime Search by Yves Gallot was very active from 2001-2004. It was a premier project ranking second only to GIMPS in organization and size of primes found. In 2009, PrimeGrid, through its PRPNet, revitalized the search thanks in large part to David Underbakke, Mark Rodenkirch, and Shoichiro Yamada, each of whom provided the necessary software updates to get the project moving again. Now with Michael Goetz's native BOINC port of a modified GeneferCUDA, it's time to AWAKEN the potential of this project.
The search will concentrate on N=4194304 (n=22) which, if successful, has the potential to discover the world's largest known prime number. Due to the size of the work units at this N, this search is a GPU-only project. WUs will get longer with time, but a rough estimate is that one WU will take 8 days to crunch on a GTX 460. Values of b up to approximately 490k are within the testing limits of GeneferCUDA. As of January 2012, a prime found with b >= 1248 would produce a new world record. (FYI, 1248^4194304+1 is composite. It's not going to be that easy!)
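To see why b >= 1248 at this N is record territory: the decimal length of b^N+1 is roughly floor(N·log10(b))+1, which can be checked against the 12,978,189 digits of 2^43112609-1, the largest known prime as of January 2012. A back-of-the-envelope sketch (constants from this post and public records):

```python
import math

N = 4_194_304                 # 2^22, the exponent for this search
RECORD_DIGITS = 12_978_189    # digits of 2^43112609 - 1 (record, Jan 2012)

def gfn_digits(b, n=N):
    """Approximate decimal digit count of b^n + 1."""
    return math.floor(n * math.log10(b)) + 1

print(gfn_digits(1248))       # roughly 12.99 million digits, past the record
```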
GeneferCUDA is currently only available for Windows and Linux clients. GenefX64 (CPU) is available for MacIntel.
An Nvidia GPU with double precision floating point hardware is required for this project. The following GeForce GPUs will work:
- GTX 260 and above (desktop only; mobile 2xx GPUs do not have double precision)
- GTX 4xx
- GTX 5xx
Many Tesla and Quadro GPUs will also work. They must be CC 1.3 or higher; check Nvidia's documentation for a comprehensive list. GeneferCUDA is particularly sensitive to overclocking, and since this project requires 100% correct operation of the entire GPU over an extended period of time, overclocking is not recommended. If you would prefer to crunch smaller WUs and/or use a CPU for crunching, the PRPNet GFN project is searching at N values of 32768, 65536, 262144, and 524288.
If you have the resources, the drive, and the desire to be the finder of THE LARGEST KNOWN PRIME, then this is your project.
Best of Luck to everyone!!!
----------------------------------------------------------------------------
A search such as this obviously needs a big sieve effort. If you would like to help out with the manual sieving effort, please see the instructions in the GFN Prime Search Sieving thread. It's available for 64-bit Windows CPUs ONLY... but can be run virtually within Linux.
For more information about generalized Fermat numbers and primes, please visit these links:
For more information about Fermat numbers and primes, please visit these links:
For more information about Pierre de Fermat, please visit these links:
A special thanks to the following people:
- David Underbakke for AthGfn64 (sieve), Genefx64 and updates to Genefer, and Genefer80 used in the PRPNet searches.
- Mark Rodenkirch for PRPNet and his collaboration with David in updating Genefx64, Genefer, and Genefer80 to prepare the programs for a distributed effort.
- Shoichiro Yamada for GeneferCUDA.
- Michael Goetz for the native BOINC build of GeneferCUDA - Windows.
- Iain Bethune for the native BOINC build of GenefX64.
- Ronald Schneider for the native BOINC build of GeneferCUDA - Linux.
- and to Everyone who attempts this search. :)
____________
My lucky number is 75898^524288+1
|
|
|
John Honorary cruncher
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
|
Special Information Regarding Beta Testing
This project is still in the beta testing phase. There are several differences in the project at this stage, and some additional information you need to know.
- Currently, as of 1/22/2012, testing is being done at N=262144. At this WU size, CPU crunching is feasible, and there is a MacIntel CPU client available in addition to the Windows CUDA client.
- There is a problem running GeneferCUDA on GTX 550 Ti GPUs.
- The problem is not well understood, and it is unknown whether it affects every GTX 550 Ti, or just some of them.
- It's hypothesized that the problem may be related to the unusual "mixed density" memory architecture on this GPU; if true, that would mean a small number of other GPUs that also have this architecture might also be affected.
- The problem manifests itself by having WUs fail with "MaxErr Exceeded" errors. This occurs at random times during processing.
- There has been some success with mitigating this problem by lowering the memory clock to 1700 while leaving the core and shader clocks at their normal values.
- The latest software release (1.05, which should be in production soon, or 1.05 beta 3 which you can run now with app_info) has a tuning parameter that you can set from the PrimeGrid preferences web page. Although initially designed as a tool for dealing with potential screen lag problems, it turns out that you may be able to achieve a small performance improvement using this feature. Please see GeneferCUDA Block Size Setting for more information.
Other known problems:
____________
|
|
|
|
I spy with my little eye GFN badges. Thanks for adding them guys! :)
Now I once again have everything at bronze! :P
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
|
Awesome! I've actually been waiting for badges to pop up. I know, I know, it's silly, but... it's fun.
Question: Why does GFN use the same progression (20K, 200K, 1M, 2M, 4M) as sieves rather than the LLR tasks (10K, 100K, 500K, 1M, 2M)? I would think it would use the same progression as all the other Boinc primality projects.
If the answer is "Because it's a GPU project.", then my next question is why does the TRP Sieve use the same progression as the PPS and GSW sieves?
I don't actually care which it is; I'd just like to understand the thinking behind the rules.
Speaking of badges, I'd like to give a shoutout to the folks over at GPUGRID -- I really like what they did with their new badge system. Besides the badges based on the number of credits you've racked up, they also award badges based on the percentile of your contribution to their published papers. That's a nice touch.
____________
My lucky number is 75898^524288+1 |
|
|
John Honorary cruncher
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
|
Question: Why does GFN use the same progression (20K, 200K, 1M, 2M, 4M) as sieves rather than the LLR tasks (10K, 100K, 500K, 1M, 2M)? I would think it would use the same progression as all the other Boinc primality projects.
Highly optimized apps deserve higher goals.
In the past, only the sieves offered optimized apps - namely 64 bit vs 32 bit. The GPU optimization came quite a bit afterwards. The first project to benefit was AP26. It was highly optimized over its 32 bit CPU version. It made sense to apply the "higher" badge system then and it still does today.
Yes, llrCUDA is on the horizon, but certainly not as significant of an optimization over 32 bit as other applications. However, this will definitely muddy the waters...but we're not there yet. ;)
Times have changed considerably since the badges were introduced but the spirit of accomplishment remains the same.
____________
|
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
|
Ah, thanks for the explanation, John. I never knew the reason that sieves had higher goals until today!
____________
My lucky number is 75898^524288+1 |
|
|
|
Can someone give me a rough idea as to current WU runtimes on a stock speed GTX570 and how much credit people are getting per WU at the moment?
Thanks.
____________
35 x 2^3587843+1 is prime! |
|
|
|
Credit is currently 3600 per unit, and run times look to be somewhere between 3300 and 3500 seconds at stock. Overclocked cards are quite a bit lower if you get your speeds stable.
____________
@AggieThePew
|
|
|
Crun-chi Volunteer tester
Joined: 25 Nov 09 Posts: 3242 ID: 50683 Credit: 151,735,680 RAC: 563
|
Just finished my first GeneferCUDA WU on a GeForce 560 Ti OC version.
Done in 3994 seconds; gave me 3600 credits...
____________
92*10^1585996-1 NEAR-REPDIGIT PRIME :) :) :)
4 * 650^498101-1 CRUS PRIME
2022202116^131072+1 GENERALIZED FERMAT
Proud member of team Aggie The Pew. Go Aggie! |
|
|
|
Just finished my first GeneferCUDA WU on a GeForce 560 Ti OC version.
Done in 3994 seconds; gave me 3600 credits...
Well done C. |
|
|
|
Question: Why does GFN use the same progression (20K, 200K, 1M, 2M, 4M) as sieves rather than the LLR tasks (10K, 100K, 500K, 1M, 2M)? I would think it would use the same progression as all the other Boinc primality projects.
Highly optimized apps deserve higher goals.
In the past, only the sieves offered optimized apps - namely 64 bit vs 32 bit. The GPU optimization came quite a bit afterwards. The first project to benefit was AP26. It was highly optimized over its 32 bit CPU version. It made sense to apply the "higher" badge system then and it still does today.
Yes, llrCUDA is on the horizon, but certainly not as significant of an optimization over 32 bit as other applications. However, this will definitely muddy the waters...but we're not there yet. ;)
Times have changed considerably since the badges were introduced but the spirit of accomplishment remains the same.
If this is the case, then how are you calculating credit? If you are comparing optimized apps such as PPS Sieve for badge purposes, you would think to award similar credit per minute, BUT since this is a primality test it shouldn't be that high. I think that half the credit per minute (or second) of PPS Sieve would be appropriate. On my OC GTX 460 I finish PPS Sieve WUs in about 26 minutes. That's about 130 credits per minute (rounded). Half of that is 65, and times the 72 minutes (average) a 262144 WU takes, that would be 4680.
As of now, even though I REALLY want to find a GFN, I am still drawn to the sieves for better credit for my electricity-guzzling space heater(s) (with guzzling A/C to counteract it/them). Short term for all the testing, and occasional runs later on, is what I see myself doing, as would many others at the current per-minute rates. If you REALLY want to draw crunchers to spend more of their precious GPU cycles on what will eventually be like SOBs on a GPU, you need to have a little more incentive. Some don't care at all about credit, some care only about credit, and then there is what I think is the majority: people who want to find primes AND get fair credit while doing it. That is where I stand, in the middle. I DON'T want to start a debate, just offer some insight into what a lot of people's psyche seems to be.
When testing progresses to 524288, maybe a per-minute recalculation? I know it's just testers for now, but it would be a fairer amount even for us 'lowly' testers (and better for our RACs).
Just my .02 cen... no .03 cents.
NeoMetal*
P.S. If I sound like a raving lunatic, disregard the above.
P.P.S. Mike, I'm still testing. Lots better.
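The arithmetic behind the proposal above, spelled out (all numbers are NeoMetal's estimates from the post, not official rates):

```python
pps_sieve_minutes = 26                   # PPS Sieve WU on an OC GTX 460
pps_credit_per_min = 130                 # rounded, per the post
proposed_rate = pps_credit_per_min / 2   # half rate for a primality test
gfn_minutes = 72                         # average n=262144 WU, same card
proposed_credit = proposed_rate * gfn_minutes
print(proposed_credit)                   # 4680.0, vs. the 3600 actually awarded
```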
____________
Largest Primes to Date:
As Double Checker: SR5 109208*5^1816285+1 Dgts-1,269,534
As Initial Finder: SR5 243944*5^1258576-1 Dgts-879,713
|
|
|
|
I agree, if you want to get a big number of GPU crunchers, the credit will have to be raised, especially when you realise how long the units will take to finish. You need an incentive to get people off the sieves and onto these very long LLRs.
But that will probably prove itself when it is released to the wider public.
____________
|
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
|
If this is the case then how are you calculating credit?
Google translation into English: "Gee, I wonder what's inside this box? And who is this Pandora person?"
Sorry, John, I couldn't resist. ;-)
____________
My lucky number is 75898^524288+1 |
|
|
|
I agree, if you want to get a big number of GPU crunchers, the credit will have to be raised, especially when you realise how long the units will take to finish. You need an incentive to get people off the sieves and onto these very long LLRs.
But that will probably prove itself when it is released to the wider public.
I think the current rate is selected for the current length of a WU. If the n=4194304 units also get 3600 credits... well, that would be extremely stingy methinks and therefore unlikely :) (450 cr/day, 18.75 cr/hour)
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|
|
Port is out of work. Are we fixing to load up a bigger number, or change versions? Just wondering. Thanks
____________
@AggieThePew
|
|
|
John Honorary cruncher
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
|
If this is the case then how are you calculating credit?
Google translation into English: "Gee, I wonder what's inside this box? And who is this Pandora person?"
Sorry, John, I couldn't resist. ;-)
We are in BETA, we are in BETA, we are in BETA ad nauseam! :)
____________
|
|
|
|
I wasn't sure but is this project still a beta project? |
|
|
|
I wasn't sure but is this project still a beta project?
So it says in the prefs page. The fact that the project total is not shown on user main pages also points to that beta status... |
|
|
|
And the lack of (pictures including) a huge parade, cheerleaders and several kegs of beer also mean that there hasn't been a release party yet ;)
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|
|
I wasn't sure but is this project still a beta project?
So it says in the prefs page. The fact that the project total is not shown on user main pages also points to that beta status...
LOL sorry I was only trying to inject some humor :)
The lack of party balloons and adult beverages is a dead give away!
____________
@AggieThePew
|
|
|
|
I wasn't sure but is this project still a beta project?
So it says in the prefs page. The fact that the project total is not shown on user main pages also points to that beta status...
Zoooooooommmmm....
What's that? Oh that's the low flying humour plane passing overhead!
____________
Twitter: IainBethune
Proud member of team "Aggie The Pew". Go Aggie!
3073428256125*2^1290000-1 is Prime! |
|
|
|
I know it's still beta, but I was noticing that there are many lower-end to middle-speed cards testing, but only two 570s and one 580, and no 590s, unless I missed someone in the threads. All the fastest cards are still on PPS Sieve, and I would think you would want more top cards beta testing, so a better incentive might be needed. The current credit is not drawing enough of the big guns in as of now, especially someone with a GTX 590 (or a pair), as there may be unknown issues with those dual-chip cards. My example of different credit for the current n=262144 was merely a suggestion, and I know as n increases so will the credit, but I was thinking that establishing a baseline 'credits per minute/second/GFLOPS' number now, to be used in production, could then be extrapolated to bigger n's. By doing that now, you may draw more fast cards to beta testing. This is all assuming that more fast cards are wanted/needed for testing, and eventually for longer-term crunching, through a higher credit incentive. And as Mike said in another thread, doubling n doesn't translate into doubling WU time; it's more than double, so just doubling 3600 (or whatever the number is) when testing goes to 524288 won't work. Any other suggestions that may be helpful? Or is this all a cart-before-the-horse thing?
The question of how credit is determined to begin with was a serious but curious one. Although, if Pandora's box is how it's done, then John or Rytis (or both) must know Pandora pretty well, what with all the different subprojects. ;)
NeoMetal*
____________
Largest Primes to Date:
As Double Checker: SR5 109208*5^1816285+1 Dgts-1,269,534
As Initial Finder: SR5 243944*5^1258576-1 Dgts-879,713
|
|
|
|
I know it's still beta, but I was noticing that there are many lower-end to middle-speed cards testing, but only two 570s and one 580, and no 590s, unless I missed someone in the threads. All the fastest cards are still on PPS Sieve, and I would think you would want more top cards beta testing, so a better incentive might be needed. The current credit is not drawing enough of the big guns in as of now, especially someone with a GTX 590 (or a pair), as there may be unknown issues with those dual-chip cards.
Here are my 2 cents:
For some reason, slower cards are more efficient on GeneferCUDA than the faster ones: slower (or smaller, if you prefer) cards are much closer to the bigger ones than they are in the sieves. For instance, I ran some Genefer tasks on a 590 at less than twice the speed of a 550 Ti; when sieving, the former would be at least three times faster. The double-precision requirement is the cause of this, I think. This could help explain the lack of high-end cards.
GFN262144 tasks in PRPNet are worth ~22000 PRPNet credits. At the current implicit conversion ratio (20:1), a task done there will grant you 1100 BOINC credits in the PSA subproject. From this perspective, crunching Genefers in BOINC is over three times more rewarding than doing it in PRPNet.
We could not find primes without sieving. But sieving, unlike Genefer, will never give you the pleasure of saying "I was the one that found this prime" (actually, your cpu did, but OK), so it is fair that sievers are rewarded more generously on the credit side. I would gladly trade my 7 million or so sieving credits and badges for a top-5000 prime :)
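The PRPNet-versus-BOINC comparison above, in numbers (the ~22000 PRPNet credits and the 20:1 ratio are as stated in the post):

```python
prpnet_credit = 22_000     # approx. PRPNet credit for a GFN262144 task
conversion = 20            # implicit PRPNet-to-BOINC conversion ratio
boinc_equiv_in_psa = prpnet_credit / conversion   # 1100.0 BOINC credits
boinc_gfn_credit = 3_600   # current BOINC award for the same-size task
print(boinc_gfn_credit / boinc_equiv_in_psa)      # about 3.27: "over three times"
```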
|
|
|
|
I asked to run this project, and as was stated earlier, it started out with no credits and then 1 credit. As it's still in beta, questions on the credit rating are valid to bring up so that thought can be given to it. I'm pretty sure it will even itself out sooner rather than later, since PG is a very well-run operation.
For me, it's more about finding a big prime even though I do like credits.
On a side note, for those of you interested (Tim pointed this out), upgrading to the 951.51 beta Nvidia driver speeds up the jobs. I've gone from running a job in around 2440 seconds to 2412 seconds.
edit: wasn't sure if the option for genefer was available to everyone or if they still needed to ask to run it.
____________
@AggieThePew
|
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
|
There are two sides to this credit debate. Well, lots of sides, but consider this counter-argument to "You need more credit to draw people off the sieves".
Mind you, this counter-argument is NOT good for PrimeGrid, but it does have a certain appeal for the people who are reading this board.
One of us is possibly going to be the world record holder in the not so distant future. This person may end up holding that record for more than a few years.
Would you rather that person be someone who is interested in finding primes, or someone whose only interest was in getting to the top of the credit leader board?
Now, of course, the goal is for SOMEONE to find it, and the odds of that happening go up with more people crunching. And do not forget that this IS a race, and speed is very important: I'm not certain where GIMPS is searching right now, but if they find another Mersenne before we find a GFN at N=4194304, their new prime may be above what we can find at that N. So we have a window of opportunity here, and that window might close.
So, that's my counter-argument AND my counter-counter-argument. :)
____________
My lucky number is 75898^524288+1 |
|
|
|
How do I go about running the genefercuda part, I've tried connecting to the servers but it says something along the lines of no work is available, maybe I've configured it totally wrongly but it appeared to be working for the other parts I tried, PPS & SGS.
It also prompts me that cudart_32_32_16.dll is missing, should I just download this and retry? |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
|
How do I go about running the genefercuda part, I've tried connecting to the servers but it says something along the lines of no work is available, maybe I've configured it totally wrongly but it appeared to be working for the other parts I tried, PPS & SGS.
It also prompts me that cudart_32_32_16.dll is missing, should I just download this and retry?
At this moment, there appears to be no work on the server. I don't know what the story is with that.
This is a BOINC project, and BOINC should download everything automatically for you. If you're running with an app_info file, you'll need to update it appropriately.
____________
My lucky number is 75898^524288+1 |
|
|
|
Apologies, I was trying via the prpclient-5.0.4 command line, in BOINC I don't see the option to subscribe, probably because I haven't requested it.
How do I request to get it added or do I just add the lines to the app info file, and if so what would the lines be?
Cheers
|
|
|
|
Apologies, I was trying via the prpclient-5.0.4 command line, in BOINC I don't see the option to subscribe, probably because I haven't requested it.
How do I request to get it added or do I just add the lines to the app info file, and if so what would the lines be?
Cheers
PM John requesting to be part of the testing :)
____________
@AggieThePew
|
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
|
Apologies, I was trying via the prpclient-5.0.4 command line, in BOINC I don't see the option to subscribe, probably because I haven't requested it.
How do I request to get it added or do I just add the lines to the app info file, and if so what would the lines be?
Cheers
You have to PM John to get added to the beta, although I suspect the beta will be going open in the near future.
To run this code with PRPNet (as opposed to the version that comes with PRPNet), download the executable file here. That's the latest version which will be running on Boinc in the near future.
Place that in the directory you want to run it in. If you're going to run it under PRPNet, rename it to GeneferCUDA.exe.
If you need the cuda3.2 DLLs, you can download them from the PrimeGrid download directory. That directory listing is alphabetical, so just scroll down until you get to "cu...". They should go in the same directory where you put the executable.
____________
My lucky number is 75898^524288+1 |
|
|
|
Brilliant, advice taken, PM sent, files downloaded, tested what I could and appears to be working, just need acceptance now.
Thanks for the pointers :) |
|
|
|
Does the due date have to be so short (~24 hours)? If your queue size is set to anything greater than 1 day, it over-downloads. (Yes, I already checked my DCF, it is currently 1.1.)
____________
Reno, NV
|
|
|
John Honorary cruncher
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
|
Does the due date have to be so short (~24 hours)? If your queue size is set to anything greater than 1 day, it over-downloads. (Yes, I already checked my DCF, it is currently 1.1.)
Sorry, this is still beta testing for now. Once the project goes live, the deadline will be adjusted accordingly.
____________
|
|
|
|
Does the due date have to be so short (~24 hours)? If your queue size is set to anything greater than 1 day, it over-downloads. (Yes, I already checked my DCF, it is currently 1.1.)
Sorry, this is still beta testing for now. Once the project goes live, the deadline will be adjusted accordingly.
Fair enough.
____________
Reno, NV
|
|
|
|
Any chance for an OSX CUDA app?
____________
Reno, NV
|
|
|
|
Are you planning to add a section for the GFN subproject under a user's home screen?
E.g. where you see how many tasks you computed, how many credits for this subproject, and where you can click on your prime list, etc.?
____________
|
|
|
|
GeneferCUDA is sensitive not only to overclocking, but also to temperature when overclocked. My card gets errors above ~82C, but at 75C WUs run stably. link to image |
|
|
|
are you planning to add a section for the sub-project gfn under a users home screen.
e.g. where you see how many tasks you computed, how many credits for this subproject and where you can click on your prime-list etc?
I'll second that :)
____________
|
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
|
GeneferCUDA is sensitive not only to overclocking, but also to temperature when overclocked. My card gets errors above ~82C, but at 75C WUs run stably. link to image
You have no idea how useful that information is. Thank you!!!
We've known since the beginning that Genefer is very sensitive to overclocking, but it could have been either a hardware failure or a software timing problem.
If temperature affects the results, then it's not a software problem. The sensitivity to overclocking is definitely a hardware issue.
One mystery solved.
Again, thanks.
____________
My lucky number is 75898^524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
|
are you planning to add a section for the sub-project gfn under a users home screen.
e.g. where you see how many tasks you computed, how many credits for this subproject and where you can click on your prime-list etc?
The project's still in beta. Slowly, they've been adding in all the webserver parts:
- The badges are in.
- The stat export is in, so GFN shows up on the free-DC sub-project stats.
- The application page lists the Genefer applications.
- EVERYTHING necessary to make GFN actually work! :)
I think this is the only remaining part to go, and it will be done.
____________
My lucky number is 75898^524288+1
|
|
|
|
are you planning to add a section for the sub-project gfn under a users home screen.
e.g. where you see how many tasks you computed, how many credits for this subproject and where you can click on your prime-list etc?
The project's still in beta. Slowly, they've been adding in all the webserver parts:
- The badges are in.
- The stat export is in, so GFN shows up on the free-DC sub-project stats.
- The application page lists the Genefer applications.
- EVERYTHING necessary to make GFN actually work! :)
I think this is the only remaining part to go, and it will be done.
In our admins we trust! :)
As always, take your time. I'd rather have an app that works (damn near) perfectly without stats than stats and a borked app that wastes my C/GPU time! :)
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
|
As always, take your time. I'd rather have an app that works (damn near) perfectly without stats than stats and a borked app that wastes my C/GPU time! :)
You should be able to get both, because the admins putting the server together and the people doing the de-borking are different people, so we can work in parallel! :)
____________
My lucky number is 75898^524288+1 |
|
|
|
If temperature affects the results, then it's not a software problem. The sensitivity to overclocking is definitely a hardware issue.
The same pattern is observed in Folding@home and in GPUGRID. Also, these projects are sensitive to overclocking the memory (an incompatible combination of frequency and timings). For example, the first WU heats the card and gets an error, and the next WU errors almost immediately. link to image |
|
|
Scott Brown Volunteer moderator Project administrator Volunteer tester Project scientist
Joined: 17 Oct 05 Posts: 2408 ID: 1178 Credit: 19,577,748,615 RAC: 11,711,407
|
If temperature affects the results, then it's not a software problem. The sensitivity to overclocking is definitely a hardware issue.
One mystery solved.
Again, thanks.
Actually, it isn't really a "hardware" issue either, since the same hardware at the same overclock would likely function stably in another environment with lower temps. The hardware part comes into play in that there is no set temp at which each chip/card combo becomes unstable. For example, I have one card running quite stably at exactly 82C.
Given that I have several different cards of various speeds/shaders/memory/etc., I'd offer the following guidelines when it comes to overclocking on the GFN subproject (*note: much of this advice applies to some degree to the other GPU subprojects as well as to overclocking in general, and as always, YMMV):
1) Keep it under 80C...while I have had some success with cards that run hotter than this (even without an overclock), generally speaking, it is once one exceeds the 80C range that stability issues start emerging in my experience. Some cards/chips can exceed this, but if you are running that hot, you need to keep a very close eye on things (and be prepared to downclock or buy another card eventually...excessive heat will shorten the card's lifespan).
2) Is that a laptop GPU you are overclocking? ...Don't do it! Frankly, it is not the best idea to run such work on a laptop, and many would advise against it generally. I have been successfully running CPU and GPU work on laptops for years, however, without issue. I do so by taking all the necessary precautions: running in a relatively cool environment with a laptop cooler (with active fans) and a regular cleaning schedule of canned air at least every month (including less frequent opened-case cleanings). Even with those precautions, I would not overclock anything; doing so risks running into other issues (see #1 above).
3) Go up in small increments when overclocking. For the most part, the shader and core clocks are linked for GFN work (the exception is for GTX 2xx cards and related Quadro/Tesla cards built on these chips). When you find a clock that is unstable, back off a couple of overclock steps and you should be fine (assuming #1 above isn't an issue).
4) The GFN project appears to be particularly sensitive to memory overclocks. These don't gain you greatly in overall speed increases anyway, so I wouldn't recommend playing with them at all. Indeed, if you are not overheating and haven't done much to the shader clocks but are still getting stability issues, you might consider downclocking the memory (similarly to the workaround for the GTX 550 Ti cards).
5) Remember that overclocking necessarily uses more power. This in turn puts more stress on your power supply, thereby increasing overall heat. This leads to several possible issues, including A) overstressing the power supply, resulting in a shortened PS lifespan, possible shorts in other parts of your system such as the motherboard, and non-GPU instability; B) hotter-running GPUs and CPUs (I have reduced heat in some systems just by installing a more powerful/efficient PS); and C) problems with GPUs that do not have extra power connectors (i.e., the PCIe slot is limited to 75W; overclocking some cards without external power can exceed this limit, and running at the limit can produce instability in some systems).
6) That GT530 you bought is never going to be a GTX 560 Ti (or fill in whatever card comparison you like). That is, you are not going to make a mid-range card out of an entry-level card nor are you going to make a top-end one starting with a mid-range. You may find that you can take a card from a particular series and overclock it successfully to perform at or near the stock clocked card from the next higher series (e.g., I have my wife's superclocked EVGA GTX 550 Ti performing about as well as a stock clocked GTX 460), but you are not going to be able to do better than that 99% of the time (*note: there may be rare exceptions). If you really want a top-end card, buy one...you aren't going to overclock your way there with something else.
Overclocking can be a very useful tool to increase performance for your equipment. This can be a Win-Win situation where the user gets more credit and increases their chance of finding a prime and where the project benefits from the overall increased work completion. But it is a Lose-Lose if you don't do it very carefully and responsibly.
____________
141941*2^4299438-1 is prime!
|
|
|
|
Memory problems occur when an incompatible combination of frequency and timings is used. That is, you need to use higher (looser) timings as the frequency increases. Check the memory IC datasheet. |
|
|
John Honorary cruncher
 Send message
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
                 
|
are you planning to add a section for the sub-project gfn under a users home screen.
e.g. where you see how many tasks you computed, how many credits for this subproject and where you can click on your prime-list etc?
The project's still in beta. Slowly, they've been adding in all the webserver parts:
- The badges are in.
- The stat export is in, so GFN shows up on the free-DC sub-project stats.
- The application page lists the Genefer applications.
- EVERYTHING necessary to make GFN actually work! :)
I think this is the only remaining part to go, and it will be done.
Rytis added GFN stats to account page. :)
____________
|
|
|
|
Rytis added GFN stats to account page. :)
Very nice! and thanks to ALL of you who made this possible.
____________
@AggieThePew
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
are you planning to add a section for the sub-project gfn under a users home screen.
e.g. where you see how many tasks you computed, how many credits for this subproject and where you can click on your prime-list etc?
The project's still in beta. Slowly, they've been adding in all the webserver parts:
- The badges are in.
- The stat export is in, so GFN shows up on the free-DC sub-project stats.
- The application page lists the Genefer applications.
- EVERYTHING necessary to make GFN actually work! :)
I think this is the only remaining part to go, and it will be done.
Rytis added GFN stats to account page. :)
I just realized I forgot one (make that two, um, three...) item(s) from my list:
The first two I forgot because those pages are currently disabled for the challenge. They may, in fact, already be done.
The third is something out of sight, and it may already be done.
____________
My lucky number is 75898^524288+1
|
|
|
|
Thank you Rytis and everyone else involved with this sub-project. You're doing a great job! Know that the (mostly) silent majority really appreciate what you do.
____________
|
|
|
|
I think you are all doing a wonderful job but I have a question.
I am never one to complain about getting too many WUs LOL but I am getting way too many WUs to do in the time given to do them, so I changed my computing preferences to 1 day of work and it is not working. I got 49 WUs to do by 8:17:29 tomorrow night. I have lots of others to do before that.
My GPU is a GTX560, I am averaging around 20 wus a day and I have Vista if that helps.
Is there any way of controlling the amount of WUs I am getting? I really hate to abort them because I can't do them in time.
Thank you for your time in answering ~smiles~ |
|
|
|
I think you are all doing a wonderful job but I have a question.
I am never one to complain about getting too many WUs LOL but I am getting way too many WUs to do in the time given to do them, so I changed my computing preferences to 1 day of work and it is not working. I got 49 WUs to do by 8:17:29 tomorrow night. I have lots of others to do before that.
My GPU is a GTX560, I am averaging around 20 wus a day and I have Vista if that helps.
Is there any way of controlling the amount of WUs I am getting? I really hate to abort them because I can't do them in time.
Thank you for your time in answering ~smiles~
You can reduce the additional buffer in the BOINC preferences. Setting it to 0 (or 0.1, for instance) would do the trick (assuming you have a permanent connection to the server).
You could also change the DCF (duration correction factor) in the file client_state.xml inside the PrimeGrid folder. |
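For reference, the DCF is stored per project in client_state.xml. A minimal sketch of the relevant element (surrounding elements abbreviated; the value shown is illustrative, not a recommendation for any particular machine):

```xml
<project>
    <master_url>http://www.primegrid.com/</master_url>
    <!-- BOINC multiplies its runtime estimates by this factor;
         an inflated value makes the client think WUs will take much
         longer than they really do, so it fetches too much work -->
    <duration_correction_factor>1.000000</duration_correction_factor>
</project>
```

Shut the BOINC client down before editing, or it will overwrite the change when it exits.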
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
I think you are all doing a wonderful job but I have a question.
I am never one to complain about getting too many WUs LOL but I am getting way too many WUs to do in the time given to do them, so I changed my computing preferences to 1 day of work and it is not working. I got 49 WUs to do by 8:17:29 tomorrow night. I have lots of others to do before that.
My GPU is a GTX560, I am averaging around 20 wus a day and I have Vista if that helps.
Is there any way of controlling the amount of WUs I am getting? I really hate to abort them because I can't do them in time.
Thank you for your time in answering ~smiles~
Since this is still in beta, they have the deadlines set to 24 hours. You need to have your buffer set to less than that or you will have problems. Once we go up to a larger N they will most likely raise the deadline, if for no other reason than the WUs will be substantially longer. (We're currently at n=18. At n=19, the WUs are about 3 times longer, and at n=20, they're about 12 times longer. Our eventual target is n=22, where the WUs are 130 to 250 times longer.)
I think we're ready to move up to n=20 -- and the deadlines will have to be longer for that. With longer deadlines, you can make your buffers larger and Boinc will be able to handle it.
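To make the deadline math concrete, here is a rough sketch using only the multipliers quoted above; the 90-minute n=18 baseline is an assumed example figure, not a measurement for any particular card:

```python
# Rough WU-length estimates using the multipliers quoted above
# (n=19 ~3x, n=20 ~12x, n=22 ~130-250x relative to n=18).
# The 90-minute n=18 baseline is an assumed example figure.

BASELINE_N18_MINUTES = 90  # assumed n=18 run time (example only)

# (low, high) multipliers relative to n=18, from the post above
MULTIPLIERS = {18: (1, 1), 19: (3, 3), 20: (12, 12), 22: (130, 250)}

def estimated_hours(n):
    """Return (low, high) estimated hours per WU at a given n."""
    lo, hi = MULTIPLIERS[n]
    return (lo * BASELINE_N18_MINUTES / 60, hi * BASELINE_N18_MINUTES / 60)

for n in sorted(MULTIPLIERS):
    lo, hi = estimated_hours(n)
    span = f"~{lo:g}" if lo == hi else f"~{lo:g} to {hi:g}"
    print(f"n={n}: {span} hours per WU")
```

Under those assumptions an n=22 WU runs for roughly 8 to 16 days, which is why a 24-hour deadline only makes sense while we are at n=18.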
____________
My lucky number is 75898^524288+1 |
|
|
Crun-chi Volunteer tester
 Send message
Joined: 25 Nov 09 Posts: 3242 ID: 50683 Credit: 151,735,680 RAC: 563
                         
|
And if (for now) a WU needs 58 minutes to finish on my card, and I get 3600 credits for one: how much will I get for a WU that is a few times longer?
____________
92*10^1585996-1 NEAR-REPDIGIT PRIME :) :) :)
4 * 650^498101-1 CRUS PRIME
2022202116^131072+1 GENERALIZED FERMAT
Proud member of team Aggie The Pew. Go Aggie! |
|
|
|
(We're currently at n=18. At n=19, the WUs are about 3 times longer, and at n=20, they're about 12 times longer. Our eventual target is n=22, where the WUs are 130 to 250 times longer.)
I think we're ready to move up to n=20 -- and the deadlines will have to be longer for that. With longer deadlines, you can make your buffers larger and Boinc will be able to handle it.
Are we just going to jump right over n=19 or do you just mean that the project is debugged enough to be able to move to n=20?
____________
@AggieThePew
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
(We're currently at n=18. At n=19, the WUs are about 3 times longer, and at n=20, they're about 12 times longer. Our eventual target is n=22, where the WUs are 130 to 250 times longer.)
I think we're ready to move up to n=20 -- and the deadlines will have to be longer for that. With longer deadlines, you can make your buffers larger and Boinc will be able to handle it.
Are we just going to jump right over n=19 or do you just mean that the project is debugged enough to be able to move to n=20?
It's not my choice, but my opinion would be that there's no reason to do n=19 on the Boinc side since it's been crunched for over a year at that level on PRPNet.
N=18 made sense as a stepping stone because A) the WUs are actually useful, and B) they're not trivially short the way the earlier WUs were. But now that we have some experience running at decent lengths, I think the next step should be to an "n" that hasn't been searched before. As a beta test, I don't think there's anything to learn by doing n=19 here.
Perhaps we go straight to n=22, or maybe we crunch n=20 and then n=21, perhaps hoping to find a prime at those levels before moving on. I don't think any decision has been made.
There is a reason to stay at n=18, however: with all the GPU power available on the Boinc side, there's a possibility for finding some nice primes during the Tour de Primes. The winner of the green jersey this year, in my opinion, will probably be someone using a GPU crunching GFN at n=18.
____________
My lucky number is 75898^524288+1 |
|
|
Honza Volunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1962 ID: 352 Credit: 6,336,802,620 RAC: 2,351,903
                                      
|
There is a reason to stay at n=18, however: with all the GPU power available on the Boinc side, there's a possibility for finding some nice primes during the Tour de Primes. The winner of the green jersey this year, in my opinion, will probably be someone using a GPU crunching GFN at n=18.
Yes, finding prime even at n=18 would be nice.
There is another reason to stay at n=18 for a while longer...we are not quite finished with sieving for n=20,21,22.
Has anyone seen a probability calculation of how many primes we can expect at each n?
Maybe putting the lower n's primes in a table would give us an idea of what to expect.
Primes at higher n's are rare...but how rare?
____________
My stats |
|
|
|
There is a reason to stay at n=18, however: with all the GPU power available on the Boinc side, there's a possibility for finding some nice primes during the Tour de Primes. The winner of the green jersey this year, in my opinion, will probably be someone using a GPU crunching GFN at n=18.
Unless someone has a find at n=19 on the PRPNet side, as you did before :)
On the debugging process, looking at my wingman results, I keep seeing a lot of tasks aborted (probably due to DCF issues), but also a high rate of "errors while computing" and "marked as invalid". This may be acceptable with tasks that take less than two hours on a low end card, but might mean a big waste of crunching power at higher n.
The positive side is that most "errors while computing" seem to happen at the beginning (looking at reported times) and are much more frequent than invalid tasks (from other causes). My sample is small, though. Not sure if it represents the whole Genefer-BOINC universe. |
|
|
|
There are a lot of invalid and errored workunits. Some of the users are running a mix of sieves and Genefer, and I'm guessing they have OC'd cards for the sieves, which causes issues with GeneferCUDA as we know. Almost every one of my units seems to have been run 3-5 times, but then we are still in beta mode so I kind of expect this. |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
There is another reason to stay at n=18 for a while longer...we are not quite finished with sieving for n=20,21,22.
We're not even close to being finished with the sieving -- but sieving is a "diminishing returns" endeavor; we have probably already eliminated about 95% of the total candidates that would be removed if the sieve was complete. For example, we're roughly 1/8th of the way through the n=22 sieve, and have eliminated about half of the 50 million candidates in the sieve file. For the remaining 7/8 ths of the sieve, a paper napkin calculation shows us as eliminating maybe 1 million more candidates. 88% of the sieving remains, but it's only going to find 4% of the factors.
In other words, if we start crunching now, we're running at only 96% efficiency compared to if we waited. Of course, while we're waiting, we're running at 0% efficiency. At the speed the sieve is progressing, it looks to be a long wait for that last 4%.
Is it worth sieving? Yes, because it's still more efficient at removing candidates. But it's not, in my opinion, a reason not to start crunching for PRP. I'll probably start crunching at N=22 on my own if we're not at least at n=20 by the end of the month.
At n=20 and n=21 we're not as far along as we are at n=22, but because of the diminishing returns, in terms of factors, the equations are fairly similar.
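The napkin math above can be spelled out; all figures here are taken straight from this post (50 million candidates in the n=22 sieve file, about half removed at 1/8 of the depth, roughly 1 million more removals expected from the final 7/8):

```python
# Diminishing returns of the n=22 sieve, using the figures from the post.

total_candidates = 50_000_000
removed_so_far = 25_000_000            # ~half, at 1/8 of sieve depth
expected_further_removals = 1_000_000  # rough estimate for remaining 7/8

remaining_now = total_candidates - removed_so_far

# Fraction of PRP tests wasted by starting now instead of waiting
# for the sieve to finish:
wasted_fraction = expected_further_removals / remaining_now
efficiency = 1 - wasted_fraction

print(f"Starting PRP tests now runs at ~{100 * efficiency:.0f}% efficiency")
```

That 1-in-25 waste is the 96% efficiency figure quoted above.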
As for "how many primes..." I'll leave that to others to answer.
____________
My lucky number is 75898^524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
On the debugging process, looking at my wingman results, I keep seeing a lot of tasks aborted (probably due to DCF issues), but also a high rate of "errors while computing" and "marked as invalid". This may be acceptable with tasks that take less than two hours on a low end card, but might mean a big waste of crunching power at higher n.
The positive side is that most "errors while computing" seem to happen at the beginning (looking at reported times) and are mush more frequent than invalid tasks (by other causes). My sample is small though. Not sure if it represents the whole Genefer-Boin universe.
This is what I see looking at my wingmen:
Lots of "aborted by user". This may be due to DCF problems. Most people probably did not bother to use my procedure for fixing the DCF, so even though the estimated GFLOPS was corrected, it will take a while for the DCF to drop to a reasonable value.
There's a LOT of errored WUs. There's a lot of causes; some of which I can guess at, some I can't. There's a heck of a lot of ways to make a GPU task go bad. Most of them cause the WU to fail at initialization, or shortly thereafter. That's not so bad.
There are a lot of people who are getting what appear to be overclocking-related errors. Most people overclock their GPUs. We're quickly learning that the hardware circuits, either the FPU or the video RAM, are highly susceptible to errors when overclocked.
So lots of people are going to get errors until they lower the clocks on the GPU. Then they'll get FEWER errors. Fewer errors isn't so bad on 90 minute WUs, but it's going to be a killer with 200 hour workunits.
My recommendation therefore is, was, and will probably always be to run GPUs at stock clocks when running Genefer. What's the point of running 10% faster if you never complete a WU?
Experience on the 550 Ti seems to indicate that it's the memory clock that's most important in reducing the error rate. There's not a whole lot of data to back that up, however, and it's only a theory at this point.
____________
My lucky number is 75898^524288+1 |
|
|
|
I have a lot of errored units. It doesn't show me which card ran the unit, but I suspect it's the 8800 and not the 460 that is having the errors. I don't recall any errors before adding in the 8800 to test it out, since I got it used. It's not only these units I'm seeing errors on; the GPU tasks for MilkyWay have also been erroring out since adding the 8800.
Is there a way to set the units to check if the card is compatible before trying to run it? It won't be an issue for me much longer, as I'm waiting on a few more pieces to come in for a new system and the 8800 is going in that, but for those who are running more than one card with a possibly incompatible card....
____________
771*2^1354880+1 is prime |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
I have a lot of errored units. It doesn't show me which card ran the unit, but I suspect it's the 8800 and not the 460 that is having the errors. I don't recall any errors before adding in the 8800 to test it out, since I got it used. It's not only these units I'm seeing errors on; the GPU tasks for MilkyWay have also been erroring out since adding the 8800.
Is there a way to set the units to check if the card is compatible before trying to run it? It won't be an issue for me much longer, as I'm waiting on a few more pieces to come in for a new system and the 8800 is going in that, but for those who are running more than one card with a possibly incompatible card....
V1.06 was intended to diagnose your exact problem.
What happens is that A) the WUs are configured on the server to only be sent to computers with an appropriate (Compute Capability 1.3 or higher) GPU, and B) GeneferCUDA checks that it's running on an appropriate GPU.
What seems to be happening is that the server correctly sends you the WUs, but the Boinc client doesn't seem to be smart enough to understand that they can only run on one of your two GPUs. Once GeneferCUDA starts running, it immediately detects that it's running on the wrong GPU and aborts.
If you look at the result page for the failed WUs, you'll see this in the stderr output:
Priority change succeeded.
GPU=GeForce 8800 GT
Global memory=511377408 Shared memory/block=16384 Registers/block=8192 Warp size=32
Max threads/block=512
Max thread dim=512 512 64
Max grid=65535 65535 1
CC=1.1
Clock=1500 MHz
# of MP=14
A GPU with compute capability >= 1.3 is required for this program.
If you are running a boinc client 6.13.x or higher, you can use this construct in cc_config.xml to tell the client not to use the 8800 for geneferCUDA:
<exclude_gpu>
Don't use the given GPU for the given project. If <device_num> is not specified, exclude all GPUs of the given type. <type> is required if your computer has more than one type of GPU; otherwise it can be omitted. <app> specifies the short name of an application (i.e. the <name> element within the <app> element in client_state.xml). If specified, only tasks for that app are excluded. You may include multiple <exclude_gpu> elements. New in 6.13
<exclude_gpu>
<url>project_URL</url>
[<device_num>N</device_num>]
[<type>nvidia|ati</type>]
[<app>appname</app>]
</exclude_gpu>
For example:
<cc_config>
<log_flags>
[...]
</log_flags>
<options>
[...]
<exclude_gpu>
<url>http://www.primegrid.com/</url>
<device_num>1</device_num>
<type>nvidia</type>
<app>genefer</app>
</exclude_gpu>
<exclude_gpu>
<url>http://www.primegrid.com/</url>
<device_num>1</device_num>
<type>nvidia</type>
<app>genefer_wr</app>
</exclude_gpu>
[...]
</options>
</cc_config>
If you're running an old version of BOINC (below 6.13.xx), then you can't tell the client not to use the 8800 just for GeneferCUDA, but you can tell it not to use the 8800 for all BOINC projects:
<ignore_cuda_dev>1</ignore_cuda_dev>
There might also be a way of managing this with app_info, but I'm not sure of how.
____________
My lucky number is 75898^524288+1 |
|
|
|
I'll try it out when I get a chance. Right now, I'm just going with having another GPU application sending work; since the 460 is faster than the 8800, it picks these up before the 8800 has a chance, as the 8800 runs the longer GPUGrid tasks in about 24-30 hours....
It's not perfect, but it ensures the 8800 isn't always running and erroring out a Genefer task....
And if it helps with anything, my 460 hasn't errored any units and it's OC'd to 865/1730/1950 with the voltage set at 1 V.... It stays a constant 68°C crunching a Genefer unit with the fan at 90%....
____________
771*2^1354880+1 is prime |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
I'll try it out when I get a chance. Right now, I'm just going with having another GPU application sending work; since the 460 is faster than the 8800, it picks these up before the 8800 has a chance, as the 8800 runs the longer GPUGrid tasks in about 24-30 hours....
It's not perfect, but it ensures the 8800 isn't always running and erroring out a Genefer task....
And if it helps with anything, my 460 hasn't errored any units and it's OC'd to 865/1730/1950 with the voltage set at 1 V.... It stays a constant 68°C crunching a Genefer unit with the fan at 90%....
You know, based on what everyone's saying, I'm thinking it might be more of a problem with temperature than with the clock.
It might be as simple as just pegging the fan at 100%.
____________
My lucky number is 75898^524288+1 |
|
|
Genn Volunteer tester
 Send message
Joined: 16 Jul 09 Posts: 50 ID: 43504 Credit: 91,204,289 RAC: 0
                     
|
are you planning to add a section for the sub-project gfn under a users home screen.
e.g. where you see how many tasks you computed, how many credits for this subproject and where you can click on your prime-list etc?
The project's still in beta. Slowly, they've been adding in all the webserver parts:
- The badges are in.
- The stat export is in, so GFN shows up on the free-DC sub-project stats.
- The application page lists the Genefer applications.
- EVERYTHING necessary to make GFN actually work! :)
I think this is the only remaining part to go, and it will be done.
Rytis added GFN stats to account page. :)
I just realized I forgot one (make that two, um, three...) item(s) from my list:
The first two I forgot because those pages are currently disabled for the challenge. They may, in fact, already be done.
The third is something out of sight, and it may already be done.
Also, the 'Pending credits' page doesn't support GFN. |
|
|
|
Do we have any GFN WUs available now?? I was crunching all night and now the BOINC Manager doesn't want to get new WUs ;/
____________
|
|
|
|
are you planning to add a section for the sub-project gfn under a users home screen.
e.g. where you see how many tasks you computed, how many credits for this subproject and where you can click on your prime-list etc?
The project's still in beta. Slowly, they've been adding in all the webserver parts:
- The badges are in.
- The stat export is in, so GFN shows up on the free-DC sub-project stats.
- The application page lists the Genefer applications.
- EVERYTHING necessary to make GFN actually work! :)
I think this is the only remaining part to go, and it will be done.
Rytis added GFN stats to account page. :)
I just realized I forgot one (make that two, um, three...) item(s) from my list:
The first two I forgot because those pages are currently disabled for the challenge. They may, in fact, already be done.
The third is something out of sight, and it may already be done.
Also, the 'Pending credits' page doesn't support GFN.
Yes it does. Just the amount of credit that's pending is borked. But that's always the case with work units for which you always get the same amount of credits.
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|
|
Do we have any GFN WUs available now?? I was crunching all night and now the BOINC Manager doesn't want to get new WUs ;/
STE\/E ate them all up |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
And if it helps with anything, my 460 hasn't errored any units and it's OC'd to 865/1730/1950 with the voltage set at 1 V.... It stays a constant 68°C crunching a Genefer unit with the fan at 90%....
I've also got a 460, but I run it at stock (1350). 1730 is... a lot faster!
Last night I clocked my card to 1630. Everything ran hotter, and faster. That cut 15 to 20 minutes off the WU.
With the big WUs, overclocking like that will mean cutting a day or more off the run time. Very tempting -- but I probably still won't do it and opt for an extra margin of stability.
____________
My lucky number is 75898^524288+1 |
|
|
Genn Volunteer tester
 Send message
Joined: 16 Jul 09 Posts: 50 ID: 43504 Credit: 91,204,289 RAC: 0
                     
|
are you planning to add a section for the sub-project gfn under a users home screen.
e.g. where you see how many tasks you computed, how many credits for this subproject and where you can click on your prime-list etc?
The project's still in beta. Slowly, they've been adding in all the webserver parts:
- The badges are in.
- The stat export is in, so GFN shows up on the free-DC sub-project stats.
- The application page lists the Genefer applications.
- EVERYTHING necessary to make GFN actually work! :)
I think this is the only remaining part to go, and it will be done.
Rytis added GFN stats to account page. :)
I just realized I forgot one (make that two, um, three...) item(s) from my list:
The first two I forgot because those pages are currently disabled for the challenge. They may, in fact, already be done.
The third is something out of sight, and it may already be done.
Also, the 'Pending credits' page doesn't support GFN.
Yes it does. Just the amount of credit that's pending is borked. But that's always the case with work units for which you always get the same amount of credits.
You're right, thanks for pointing that out. Maybe the claimed credit for GPU workunits isn't handled correctly by BOINC. |
|
|
Scott Brown Volunteer moderator Project administrator Volunteer tester Project scientist
 Send message
Joined: 17 Oct 05 Posts: 2408 ID: 1178 Credit: 19,577,748,615 RAC: 11,711,407
                                                
|
And if it helps with anything, my 460 hasn't errored any units and it's OC'd to 865/1730/1950 with the voltage set at 1 V.... It stays a constant 68°C crunching a Genefer unit with the fan at 90%....
I've also got a 460, but I run it at stock (1350). 1730 is... a lot faster!
Last night I clocked my card to 1630. Everything ran hotter, and faster. That cut 15 to 20 minutes off the WU.
With the big WUs, overclocking like that will mean cutting a day or more off the run time. Very tempting -- but I probably still won't do it and opt for an extra margin of stability.
1730 is the max stable clock on my ASUS TOP 460 card...the gain is almost exactly 20 minutes per unit. Heat doesn't exceed 68°C. Above 1730 (really 1728 in actual clock), I get immediate errors, which is likely due to actual hardware limits (clock timings, etc.), since heat doesn't go up much per clock bump in my case.
____________
141941*2^4299438-1 is prime!
|
|
|
|
A GTX 470 at 1544 MHz crunches 27 WUs/day, but at 1512 MHz, 26 WUs/day. |
|
|
|
And if it helps with anything, my 460 hasn't errored any units and it's OC'd to 865/1730/1950 with the voltage set at 1 V.... It stays a constant 68°C crunching a Genefer unit with the fan at 90%....
I've also got a 460, but I run it at stock (1350). 1730 is... a lot faster!
Last night I clocked my card to 1630. Everything ran hotter, and faster. That cut 15 to 20 minutes off the WU.
With the big WUs, overclocking like that will mean cutting a day or more off the run time. Very tempting -- but I probably still won't do it and opt for an extra margin of stability.
1730 is the max stable clock on my ASUS TOP 460 card...the gain is almost exactly 20 minutes per unit. Heat doesn't exceed 68°C. Above 1730 (really 1728 in actual clock), I get immediate errors, which is likely due to actual hardware limits (clock timings, etc.), since heat doesn't go up much per clock bump in my case.
I actually lowered it because of heat issues during the challenge. I had it stable at 900/1800/2000 with the voltage at 1.025 V, but the fan would go back and forth between 80% and 100%. I haven't tested it with those settings on Genefer, but it's stable at 875/1750/1950. It's in the range where GPU usage isn't a constant 99%, but it crunches without issues and no screen lag. If I go higher I have to increase the voltage, and I really don't want to do that with both cards in, as it seems to set the voltage for both and the 8800 isn't OC'd. I've been a little iffy on running the fan at 100% for too long, even though it was more likely coincidence that setting my 295's fan to 100% fried it 5 minutes later....
____________
771*2^1354880+1 is prime |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
Any chance for an OSX CUDA app?
I don't think anyone ever answered your question.
The answer is technically yes, since the OSX app is very similar to the Linux app.
However, my understanding is that there are very few compatible Apple computers out there, which is why this is a low priority. Those computers that ship with Nvidia GPUs generally ship only with single-precision GPUs.
Looking at your computers, you have one computer running Darwin with 2 8800 GT GPUs (those are single precision), another with a GT 120 GPU (also single precision), one that has a GT 130 (single precision), and some more that don't have GPUs.
If you have an OSX computer that has a compatible GPU, there would be more incentive to make a build for OSX.
____________
My lucky number is 75898^524288+1 |
|
|
|
Thanks for the response. At the time I asked, I was considering upgrading my Mac Pro to a GTX 285. But I have since learned that my 2006 vintage does not support it. Ah well.
____________
Reno, NV
|
|
|
|
I know it's still beta, but I was noticing that there are many lower-end to middle-speed cards testing, yet only two 570s and one 580 and no 590s, unless I missed someone in the threads. All the fastest cards are still on PPS Sieve, and I would think you would want more top cards beta testing, so a better incentive might be needed. The current credit is not drawing enough of the big guns in as of now, especially someone with a GTX 590 (or a pair), as there may be unknown issues with those dual-chip cards.
My example of different credit for the current n=262144 was merely a suggestion, and I know as n increases so will the credit, but I was thinking that establishing a baseline "credits per minute/second/GFLOPS" number now, which will be the production number, could then be used to extrapolate to bigger n's. By doing that now you may draw more fast cards into beta testing. This is all assuming that more fast cards are wanted/needed for testing, and eventually for longer-term crunching, through a higher credit incentive. And as Mike said in another thread, doubling n doesn't translate into doubling WU time; it's more than double, so just doubling 3600 (or whatever the number is when testing goes to 524288) won't work. Any other suggestions that may be helpful? Or is this all a cart-before-the-horse thing?
The question of how credit is determined to begin with was a serious but curious one. Although, if Pandora's box is how it's done, then John or Rytis (or both) must know Pandora pretty well, what with the different sub-projects and all. ;)
NeoMetal*
I have a GTX 590 and just crunched a couple of WUs.
The card is stock and the run times were 3285 seconds.
Are these times ok?... The block size is set to zero.
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
I have a GTX 590 and just crunched a couple of WUs.
The card is stock and the run times were 3285 seconds.
Are these times ok?... The block size is set to zero.
Sounds about right.
____________
My lucky number is 75898^524288+1 |
|
|
|
I've added my EVGA GTX 465, moderately overclocked, to the effort. It appears to be stable at ~4150 sec/WU. I use EVGA Precision to keep the card at or below 70°C.
____________
|
|
|
|
Thanks for the response. At the time I asked, I was considering upgrading my Mac Pro to a GTX 285. But I have since learned that my 2006 vintage does not support it. Ah well.
I have the same mac pro, 2006 model. Do you know of any Nvidia DP card for this model?
____________
|
|
|
Scott Brown Volunteer moderator Project administrator Volunteer tester Project scientist
 Send message
Joined: 17 Oct 05 Posts: 2408 ID: 1178 Credit: 19,577,748,615 RAC: 11,711,407
                                                
|
Thanks for the response. At the time I asked, I was considering upgrading my Mac Pro to a GTX 285. But I have since learned that my 2006 vintage does not support it. Ah well.
I have the same mac pro, 2006 model. Do you know of any Nvidia DP card for this model?
I do not know if your specific model will handle it (power supply issues, etc.), but a Quadro FX 4800 was made for the Mac. It is DP capable (basically the same thing as a GTX 260) and is a fairly big card, so it needs a tower case.
____________
141941*2^4299438-1 is prime!
|
|
|
|
Thanks for the response. At the time I asked, I was considering upgrading my Mac Pro to a GTX 285. But I have since learned that my 2006 vintage does not support it. Ah well.
I have the same mac pro, 2006 model. Do you know of any Nvidia DP card for this model?
I do not know if your specific model will handle it (power supply issues, etc.), but a Quadro FX 4800 was made for the Mac. It is DP capable (basically the same thing as a GTX 260) and is a fairly big card, so it needs a tower case.
Holy smokes batman! That's a thousand dollar card!
____________
|
|
|
|
Re: Quadro FX 4800 for mac pro:
http://www.nvidia.com/object/product_quadro_fx_4800_for_mac_us.html
Note: MacPro1,1 and MacPro2,1 are not compatible.
I believe the 2006 Mac Pro is the 1,1 or 2,1. Mine is a 2,1.
Edit: The issue is that the mac version of the GTX 285 and Quadro FX 4800 require EFI64. The first gen (1,1 & 2,1) Mac Pro are EFI32.
http://en.wikipedia.org/wiki/Mac_Pro#Specifications
____________
Reno, NV
|
|
|
|
Does the due date have to be so short (~24 hours)? If your queue size is set to anything greater than 1 day, it over-downloads. (Yes, I already checked my DCF, it is currently 1.1.)
Sorry, this is still beta testing for now. Once the project goes live, the deadline will be adjusted accordingly.
Now that we are out of the test phase, how about increasing the deadline from 24 hours?
____________
Reno, NV
|
|
|
STE\/E Volunteer tester
 Send message
Joined: 10 Aug 05 Posts: 573 ID: 103 Credit: 3,664,924,982 RAC: 81,589
                     
|
Yes, I had/have my Preferences set to 0.5 Days & the CUDA GFN Wu's still want to run in Priority Mode, some of my Box's will have as many as 5 or 6 of the GFN Wu's in some stage of Completion even though there's only 1 or 2 GPU's in the Box. Was watching one Box yesterday and 2 or 3 of the Wu's finished a few Minutes over their Deadline because other later deadline Wu's ran first ...
____________
|
|
|
|
Yes, I had/have my Preferences set to 0.5 Days & the CUDA GFN Wu's still want to run in Priority Mode, some of my Box's will have as many as 5 or 6 of the GFN Wu's in some stage of Completion even though there's only 1 or 2 GPU's in the Box. Was watching one Box yesterday and 2 or 3 of the Wu's finished a few Minutes over their Deadline because other later deadline Wu's ran first ...
This problem still happens if you are crunching PPS LLRs while running GFN WUs, because the PPS tasks are way off in their GFlops estimates. The PPS LLRs will raise your DCF to around 9, which will screw with the GFN WUs (and most others as well), whose estimates have now been corrected to near 1.0. As each PPS finishes, it inflates the GFN estimates to 9x what they are supposed to be, throwing BOINC into high-priority mode IF you have more than 2-3 GFN WUs waiting to run. They keep changing the ranges of the PPS WUs (sometimes running more than one range at a time) and keep forgetting to set the proper GFlops. I know they have a lot going on all the time, BUT this shouldn't happen CONSTANTLY! Aaaahh, we can dream of a day when ALL subprojects' DCFs are at (or near) >>>[[1.0]]<<< so we can end all this BOINCing crazy Hi-P.
My $0.02
NeoMetal*
____________
Largest Primes to Date:
As Double Checker: SR5 109208*5^1816285+1 Dgts-1,269,534
As Initial Finder: SR5 243944*5^1258576-1 Dgts-879,713
|
|
|
STE\/E Volunteer tester
 Send message
Joined: 10 Aug 05 Posts: 573 ID: 103 Credit: 3,664,924,982 RAC: 81,589
                     
|
I haven't been running any other PrimeGrid Wu's since the Last Challenge ended ...
____________
|
|
|
|
I haven't been running any other PrimeGrid Wu's since the Last Challenge ended ...
Your DCF number goes up instantly but comes down very slowly, so you may not have crunched enough other WUs, including GFNs. Getting a DCF down from 9 to 1 will take 20-30 properly estimated WUs. If you have gone through more than that AND haven't crunched even one PPS, then something else is happening, which I can't think of at the moment, to cause Hi-P.
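The up-instantly/down-slowly behavior can be sketched with a toy model. The 10% per-task decay rate below is an illustrative assumption, not the exact BOINC client algorithm, but it reproduces the "20-30 good WUs to recover" behavior described above:

```python
def update_dcf(dcf, ratio, decay=0.1):
    """Toy model of BOINC's duration correction factor (DCF).

    `ratio` is actual runtime / estimated runtime for a finished
    task. The DCF jumps straight up when a task overruns, but only
    moves a fraction of the way back down per well-estimated task.
    The 10% decay rate is an illustrative assumption, not the
    exact client algorithm.
    """
    if ratio > dcf:
        return ratio                      # instant increase
    return dcf + decay * (ratio - dcf)    # slow decrease

dcf = 9.0  # inflated by badly-estimated PPS tasks
for _ in range(30):
    dcf = update_dcf(dcf, 1.0)  # 30 properly estimated WUs
print(f"DCF after 30 good WUs: {dcf:.2f}")  # still above 1.0
```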
NeoMetal*
____________
Largest Primes to Date:
As Double Checker: SR5 109208*5^1816285+1 Dgts-1,269,534
As Initial Finder: SR5 243944*5^1258576-1 Dgts-879,713
|
|
|
STE\/E Volunteer tester
 Send message
Joined: 10 Aug 05 Posts: 573 ID: 103 Credit: 3,664,924,982 RAC: 81,589
                     
|
I haven't been running any other PrimeGrid Wu's since the Last Challenge ended ...
so you may be not have crunched enough other WUs including GFNs.
NeoMetal*
lol, yeah I need to crunch more ... ;) ... I've been running the GFN's for 4-5 Days now at least, the Wu's take under an Hour each so I think I've done at least 30 on each box by now ... I've also been running WCG & the SIMAP Challenge non-stop since the PG Challenge ended so I think I've run enough other Wu's too ... :)
____________
|
|
|
|
Regardless of the DCF problem, a 24-hour deadline is way too short. Anyone who keeps even 2 days of queue will be constantly over-downloading and late returning results. And it won't play nice with other CUDA projects.
____________
Reno, NV
|
|
|
John Honorary cruncher
 Send message
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
                 
|
Does the due date have to be so short (~24 hours)? If your queue size is set to anything greater than 1 day, it over-downloads. (Yes, I already checked my DCF, it is currently 1.1.)
Sorry, this is still beta testing for now. Once the project goes live, the deadline will be adjusted accordingly.
Now that we are out of the test phase, how about increasing the deadline from 24 hours?
Yes, an oversight on our part. It has been updated now to 3 days.
____________
|
|
|
STE\/E Volunteer tester
 Send message
Joined: 10 Aug 05 Posts: 573 ID: 103 Credit: 3,664,924,982 RAC: 81,589
                     
|
Thanks John ... :) ... Looks like only 2 Days though on new ones I downloaded, better than 1 Day though ... Still running in High Priority Mode too even with a .5 Preference for work ...
____________
|
|
|
|
Yes, an oversight on our part. It has been updated now to 3 days.
Thanks!
____________
Reno, NV
|
|
|
|
As of UTC time 2012-02-07 05:30:45 (local 23:30):
I am trying to get Genefer to limit WUs to what can be processed before
local time 2012-02-07 23:30:45, but have 53 WUs (avg of 2.5 hrs elapsed time per WU).
Have the BOINC setting set to 0 days additional, and still get slammed.
Also processing LLR-TRPs; no overload on them.
Thought the thread said Genefer now had 3-day deadlines.
Isn't so in Oklahoma anyway.
____________
|
|
|
STE\/E Volunteer tester
 Send message
Joined: 10 Aug 05 Posts: 573 ID: 103 Credit: 3,664,924,982 RAC: 81,589
                     
|
As of UTC time 2012-02-07 05:30:45 (local 23:30):
I am trying to get Genefer to limit WUs to what can be processed before
local time 2012-02-07 23:30:45, but have 53 WUs (avg of 2.5 hrs elapsed time per WU).
Have the BOINC setting set to 0 days additional, and still get slammed.
Also processing LLR-TRPs; no overload on them.
Thought the thread said Genefer now had 3-day deadlines.
Isn't so in Oklahoma anyway.
Same here, we may have to work thru the Wu's that were in the Pipeline already before the Deadline was increased before we start seeing Wu's with a 3 Day Deadline, don't know about that fer sure though ...
____________
|
|
|
|
I'm the same... and still getting wus with a 24 hour deadline... just hope the admins allow us 72 hours for them!
____________
|
|
|
John Honorary cruncher
 Send message
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
                 
|
Same here, we may have to work thru the Wu's that were in the Pipeline already before the Deadline was increased before we start seeing Wu's with a 3 Day Deadline, don't know about that fer sure though ...
This is correct. I should have clarified that the 3-day deadline applies only to new tasks that weren't already in the buffer. Once those clear out, you'll see 3 days. Apologies for the delay.
____________
|
|
|
|
Are you allowing 3 days for those of us that got 3 days worth of 1 day wus?
____________
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
Are you allowing 3 days for those of us that got 3 days worth of 1 day wus?
I don't think they can do that, but I wouldn't mind being proven wrong! The best thing to do is to abort enough WUs so that the remaining ones aren't at risk of missing the deadline. There's not much downside to aborting WUs that haven't started yet; they just get sent out again to other computers.
It's good Boinc etiquette to abort WUs you can't finish on time: it's better for the project than letting them miss the deadline, and it's better for your wingmen, too.
____________
My lucky number is 75898524288+1 |
|
|
Neo Volunteer tester
 Send message
Joined: 28 Oct 10 Posts: 710 ID: 71509 Credit: 91,178,992 RAC: 0
                   
|
It's good Boinc etiquette to abort WUs you can't finish on time: it's better for the project than letting them miss the deadline, and it's better for your wingmen, too.
+1
Neo
AtP |
|
|
|
Are you allowing 3 days for those of us that got 3 days worth of 1 day wus?
I don't think they can do that, but I wouldn't mind being proven wrong! The best thing to do is to abort enough WUs so that the remaining ones aren't at risk of missing the deadline. There's not much downside to aborting WUs that haven't started yet; they just get sent out again to other computers.
It's good Boinc etiquette to abort WUs you can't finish on time: it's better for the project than letting them miss the deadline, and it's better for your wingmen, too.
that's why I asked... have already aborted quite a few that were about to go out of date... but I have known projects that have extended deadlines in the past when something like downtime intervened...
____________
|
|
|
|
Random question -
Is BOINC going to show primes found (similar to the PPS LLRs, etc.) sometime in the future,
and after completion and validation, will the WUs show whether the number we were working on was or wasn't prime?
____________
|
|
|
|
Following John Bode's question:
I think two GFN PRPs have been found during the beta phase. Are they still being tested for primality or did I miss the announcement? |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
Following John Bode's question:
I think two GFN PRPs have been found during the beta phase. Are they still being tested for primality or did I miss the announcement?
It was just one prime so far. Also, these WUs don't take that long to test for primality, perhaps 15 hours on my Core2.
More info on that prime can be found here: Generalized Fermat Mega Prime
____________
My lucky number is 75898524288+1 |
|
|
|
Following John Bode's question:
I think two GFN PRPs have been found during the beta phase. Are they still being tested for primality or did I miss the announcement?
It was just one prime so far. Also, these WUs don't take that long to test for primality, perhaps 15 hours on my Core2.
More info on that prime can be found here: Generalized Fermat Mega Prime
I missed that post.
Free-DC shows 4 hits (http://stats.free-dc.org/stats.php?page=userwork&proj=pgrid&subproj=Generalized%20Fermat%20Prime%20Search&sort=hits). I assumed there were two primes found (hits referring to finder and doublechecker). Maybe the other one was not confirmed prime.
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
Free-DC shows 4 hits (http://stats.free-dc.org/stats.php?page=userwork&proj=pgrid&subproj=Generalized%20Fermat%20Prime%20Search&sort=hits). I assumed there were two primes found (hits referring to finder and doublechecker).
I suspect that, yes, those hits count for everyone who finds the prime. There could have been a prime found recently that wasn't announced yet, or Free-DC could have recorded everyone who "found" that first prime before the validator was fixed. That was at least 3 people, and might have been 4, which could also explain Free-DC showing 4 hits.
Maybe the other one was not confirmed prime.
Definitely not.
The way I understand it, PRP in Genefer is the same as "proof" in other programs. Apparently Yves Gallot was rather stringent about what he considered "proof". With the exception of a known condition where small powers of two yield false positives, I doubt anyone will ever see a Genefer PRP that is composite. The odds of a "PRP" from a GFN of that size being composite are approximately 1:1000000....0000000 (that last part has 3 million zeros in it). Winning the lottery a thousand times in a row is far more likely than one of these PRPs being composite. :)
____________
My lucky number is 75898524288+1 |
|
|
rogueVolunteer developer
 Send message
Joined: 8 Sep 07 Posts: 1257 ID: 12001 Credit: 18,565,548 RAC: 0
 
|
The way I understand it, PRP in Genefer is the same as "proof" in other programs. Apparently Yves Gallot was rather stringent about what he considered "proof". With the exception of a known condition where small powers of two yield false positives, I doubt anyone will ever see a Genefer PRP that is composite. The odds of a "PRP" from a GFN of that size being composite are approximately 1:1000000....0000000 (that last part has 3 million zeros in it). Winning the lottery a thousand times in a row is far more likely than one of these PRPs being composite. :)
Unfortunately, the math to show that his test proves primality doesn't exist.
Maybe someone here should work on that... |
|
|
axnVolunteer developer Send message
Joined: 29 Dec 07 Posts: 285 ID: 16874 Credit: 28,027,106 RAC: 0
            
|
Unfortunately, the math to show that his test proves primality doesn't exist.
Maybe someone here should work on that...
GeneFer does a base-2 PRP test (not even an SPRP test). It doesn't prove primality. Of course, composite PRPs are exceedingly rare, especially at large sizes. But you still have to prove them prime.
Which brings me to a thought I have been having. It is possible for GeneFer code to be adapted to help with the primality proving. By using different base(s) for the PRP testing, it is possible to implement a rigorous primality test. |
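For the curious, the base-2 Fermat PRP test axn describes, and the multi-base idea, can be sketched in a few lines. This toy version uses Python's built-in modular exponentiation rather than Genefer's FFT arithmetic, and the examples are small numbers chosen so they run instantly; it is an illustration of the test, not Genefer's code:

```python
def gfn(b, n):
    """The generalized Fermat number F(b, n) = b^(2^n) + 1."""
    return b**(2**n) + 1

def is_fermat_prp(N, base=2):
    """Base-`base` Fermat PRP test: passes iff base^(N-1) == 1 (mod N).
    A pass does NOT prove primality; a fail proves compositeness."""
    return pow(base, N - 1, N) == 1

def is_multibase_prp(N, bases=(2, 3, 5, 7)):
    """Stronger evidence (still not a proof): PRP to several bases."""
    return all(is_fermat_prp(N, a) for a in bases)

print(is_fermat_prp(gfn(2, 4)))          # 65537 is prime -> True
print(is_fermat_prp(gfn(10, 2)))         # 10^4+1 = 73*137 -> False
# Every F(2,n) passes the base-2 test even when composite -- the
# known power-of-two false-positive condition mentioned earlier:
print(is_fermat_prp(gfn(2, 5)))          # 2^32+1 = 641*6700417 -> True!
print(is_fermat_prp(gfn(2, 5), base=3))  # a second base catches it -> False
```

Since N-1 = b^(2^n) is fully factored (its prime factors are just those of b), a passing multi-base result can in principle be upgraded to a rigorous proof via a Pocklington-style argument, which is presumably the adaptation axn has in mind.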
|
|
John Honorary cruncher
 Send message
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
                 
|
GenefX64 has now been released for 64 bit Linux. The GFN524288 search is now open to hosts with 64 bit Linux.
Run time on an Intel Core i5 @ 2.8 GHz with 4 GB RAM is ~36 hours.
____________
|
|
|
Crun-chi Volunteer tester
 Send message
Joined: 25 Nov 09 Posts: 3242 ID: 50683 Credit: 151,735,680 RAC: 563
                         
|
GenefX64 has now been released for 64 bit Linux. The GFN524288 search is now open to hosts with 64 bit Linux.
Run time on an Intel Core i5 @ 2.8 GHz with 4 GB RAM is ~36 hours.
Then you can fix the "predicted" time to a more realistic one :)
Recent average CPU time: 59:29
____________
92*10^1585996-1 NEAR-REPDIGIT PRIME :) :) :)
4 * 650^498101-1 CRUS PRIME
2022202116^131072+1 GENERALIZED FERMAT
Proud member of team Aggie The Pew. Go Aggie! |
|
|
|
GenefX64 has now been released for 64 bit Linux. The GFN524288 search is now open to hosts with 64 bit Linux.
Run time on an Intel Core i5 @ 2.8 GHz with 4 GB RAM is ~36 hours.
Any plans for a windows version? |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
GenefX64 has now been released for 64 bit Linux. The GFN524288 search is now open to hosts with 64 bit Linux.
Run time on an Intel Core i5 @ 2.8 GHz with 4 GB RAM is ~36 hours.
Any plans for a windows version?
You mean like this:
C:\GeneferCUDA test\Genefer SVN bin\x64>genefx64_windows_boinc.exe -b
genefx64 2.3.0-0 (Windows x86_64 SSE2)
Copyright 2001-2003, Yves Gallot
Copyright 2009, Mark Rodenkirch, David Underbakke
Copyright 2010-2012, Shoichiro Yamada, Ken Brazier
Copyright 2011-2012, Iain Bethune, Michael Goetz
Command line: genefx64_windows_boinc.exe -b
Generalized Fermat Number Bench
5683936^256+1 Time: 4 us/mul. Err: 0.2500 1730 digits
4616790^512+1 Time: 9 us/mul. Err: 0.2188 3413 digits
3750000^1024+1 Time: 22 us/mul. Err: 0.2188 6732 digits
3045946^2048+1 Time: 39 us/mul. Err: 0.1953 13279 digits
2474076^4096+1 Time: 84 us/mul. Err: 0.2188 26188 digits
2009574^8192+1 Time: 208 us/mul. Err: 0.2813 51636 digits
1632282^16384+1 Time: 448 us/mul. Err: 0.2500 101791 digits
1325824^32768+1 Time: 943 us/mul. Err: 0.2188 200622 digits
1076904^65536+1 Time: 2 ms/mul. Err: 0.2500 395325 digits
874718^131072+1 Time: 4.22 ms/mul. Err: 0.2188 778813 digits
710492^262144+1 Time: 9.24 ms/mul. Err: 0.2188 1533952 digits
577098^524288+1 Time: 20.3 ms/mul. Err: 0.2188 3020555 digits
Iain can probably build an up-to-date version if there's a desire for it and the admins want to provide it. If they gave the go-ahead for a Linux version, I don't see why they wouldn't want a Windows version.
(I don't currently have an environment where I can build the assembler parts of the x64 and x87 CPU versions on my Windows box, but Iain has it set up in a Windows VM on his Mac. Yeah, I know. How ironic is that?)
Here's something interesting:
C:\GeneferCUDA test\Genefer SVN bin\x64>genefx64_windows_boinc.exe -q "1234^8192+1"
genefx64 2.3.0-0 (Windows x86_64 SSE2)
Copyright 2001-2003, Yves Gallot
Copyright 2009, Mark Rodenkirch, David Underbakke
Copyright 2010-2012, Shoichiro Yamada, Ken Brazier
Copyright 2011-2012, Iain Bethune, Michael Goetz
Command line: genefx64_windows_boinc.exe -q 1234^8192+1
Testing 1234^8192+1...
maxErr during b^N initialization = 0.0000 (0.006 seconds).
1234^8192+1 is composite. (RES=1e6ddba31fcbf2d6) (25325 digits) (err = 0.0000) (time = 0:00:18) 14:19:13
and
C:\GeneferCUDA test\Genefer SVN\genefercuda>genefercuda-boinc-windows.exe -q "1234^8192+1"
genefercuda 2.3.0-0 (Windows x86 CUDA 3.2)
Copyright 2001-2003, Yves Gallot
Copyright 2009, Mark Rodenkirch, David Underbakke
Copyright 2010-2012, Shoichiro Yamada, Ken Brazier
Copyright 2011-2012, Iain Bethune, Michael Goetz
Command line: genefercuda-boinc-windows.exe -q 1234^8192+1
Using default SHIFT value=6
The checkpoint doesn't match current test. Current test will be restarted
Starting initialization...
maxErr during b^N initialization = 0.0000 (0.023 seconds).
Estimated total run time for 1234^8192+1 is 0:00:13
1234^8192+1 is composite. (RES=1e6ddba31fcbf2d6) (25325 digits) (err = 0.0000) (time = 0:00:14) 09:34:31
Because of the inefficiency of working with such "small" numbers (the chunks of work being given to the GPU are so small that more time is being spent preparing the work for the GPU than actually doing the computations on the GPU), GenefX64 is almost as fast as GeneferCUDA.
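That overhead effect can be put in rough numbers. All the rates and the per-chunk cost below are made-up figures for illustration, not measurements of Genefer or any particular card:

```python
def gpu_time(total_work, chunk_work, launch_overhead, gpu_speed):
    """Total wall time when work is sent to the GPU in chunks,
    each chunk paying a fixed host-side preparation/launch cost."""
    chunks = total_work / chunk_work
    return chunks * (launch_overhead + chunk_work / gpu_speed)

# Made-up rates for illustration (not measured from Genefer):
GPU_SPEED = 100.0   # work units/second on the GPU
CPU_SPEED = 10.0    # work units/second on the CPU
OVERHEAD = 0.009    # seconds of host-side cost per chunk

big  = gpu_time(1000, 10.0, OVERHEAD, GPU_SPEED)  # large-N case
tiny = gpu_time(1000, 0.1, OVERHEAD, GPU_SPEED)   # small-N case
cpu  = 1000 / CPU_SPEED

print(f"GPU, big chunks:  {big:.1f} s")   # overhead negligible
print(f"GPU, tiny chunks: {tiny:.1f} s")  # overhead dominates
print(f"CPU:              {cpu:.1f} s")   # close to the tiny-chunk GPU time
```

With tiny chunks the fixed per-chunk cost swamps the GPU's raw speed advantage, which is exactly why GenefX64 nearly matches GeneferCUDA on these "small" numbers.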
____________
My lucky number is 75898524288+1 |
|
|
John Honorary cruncher
 Send message
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
                 
|
GenefX64 has now been released for 64 bit Linux. The GFN524288 search is now open to hosts with 64 bit Linux.
Run time on an Intel Core i5 @ 2.8 GHz with 4 GB RAM is ~36 hours.
Any plans for a windows version?
BOINC development is ongoing for the following MacIntel, Windows, and Linux applications:
- GeneferCUDA (recommended for N>=262144)
- GenefX64 (recommended for 131072<=N<=1048576)
- Genefer (recommended for N=131072)
- Genefer80 (available for N=131072; required for N<=65536)
All at some point can become available in BOINC. However, with only two GFN projects in BOINC now, the two applications you'll see are GeneferCUDA and GenefX64.
NOTE: even Genefer80 has a max b limit. To complete testing up to a desired b level, pfgw will have to be used. Each application is significantly slower than the previous one, so as b increases it quickly becomes increasingly inefficient to search for GFN primes. The only practical path forward is better hardware and/or better software.
____________
|
|
|
Scott Brown Volunteer moderator Project administrator Volunteer tester Project scientist
 Send message
Joined: 17 Oct 05 Posts: 2408 ID: 1178 Credit: 19,577,748,615 RAC: 11,711,407
                                                
|
*Genefer80 (recommended for N<=131072)
Since we are already past the b limits for these smaller N with the other apps, "required" rather than "recommended" is more correct here.
____________
141941*2^4299438-1 is prime!
|
|
|
Honza Volunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1962 ID: 352 Credit: 6,336,802,620 RAC: 2,351,903
                                      
|
Because of the inefficiency of working with such "small" numbers (the chunks of work being given to the GPU are so small that more time is being spent preparing the work for the GPU than actually doing the computations on the GPU), GenefX64 is almost as fast as GeneferCUDA.
Yeah, a known characteristic.
Would it be possible to share some parts and do several small tests on GPU at once?
There might be a common part (preparation of data or whatever) when N is the same for each test.
Or is it a totally crazy idea?
____________
My stats |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
Because of the inefficiency of working with such "small" numbers (the chunks of work being given to the GPU are so small that more time is being spent preparing the work for the GPU than actually doing the computations on the GPU), GenefX64 is almost as fast as GeneferCUDA.
Yeah, a known characteristic.
Would it be possible to share some parts and do several small tests on GPU at once?
There might be a common part (preparation of data or whatever) when N is the same for each test.
Or is it a totally crazy idea?
Feel free to grab the source code and do whatever you want to it. :)
I won't, for one REALLY good reason: GeneferCUDA is useless at that N because b would be too high for it to process. There's absolutely zero benefit to making the program run faster at low N when there's no possibility of it being used. Plus the numbers at that N are pretty small by today's standards and wouldn't be anywhere close to getting on the top 500 list.
If anyone wants to put that kind of effort into improving software, I can think of a lot better places to expend that effort!
____________
My lucky number is 75898524288+1 |
|
|
John Honorary cruncher
 Send message
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
                 
|
*Genefer80 (recommended for N<=131072)
Since we are already past the b limits for these smaller N with the other apps, "required" rather than "recommended" is more correct here.
Almost, I guess it should say:
*Genefer80 (available for N=131072; required for N<=65536)
updated post.
NOTE: N=131072 is still being searched independently outside of PG.
____________
|
|
|
|
Interesting little glitch. My modest nVidia GTS 450 has been running a Genefer World Record WU for quite a while. I had to suspend the computation due to a really nasty thunderstorm. When I restarted the computation the time to completion went from approximately 60 hours to over 200 hours. Well I'm now over the "end date". I'm letting it run to the end.
I noticed there were 4 other "wingmen" with computation errors on this WU. I am assuming that if mine computes properly (before the next "wingman" completes the WU), I will get some credit?
My other 2 rigs have very high end ATI/AMD graphics cards, shame you can't utilize them on this part of PrimeGrid. |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
Interesting little glitch. My modest nVidia GTS 450 has been running a Genefer World Record WU for quite a while. I had to suspend the computation due to a really nasty thunderstorm. When I restarted the computation the time to completion went from approximately 60 hours to over 200 hours. Well I'm now over the "end date". I'm letting it run to the end.
I noticed there were 4 other "wingmen" with computation errors on this WU. I am assuming that if mine computes properly (before the next "wingman" completes the WU), I will get some credit?
My other 2 rigs have very high end ATI/AMD graphics cards, shame you can't utilize them on this part of PrimeGrid.
First of all, that WU's deadline is TOO SHORT. It's 5 days, and that's just not enough. So it's definitely not your fault. Current WUs (which run about twice as long) have a deadline of 3 weeks, so you should be fine on the next WU.
I'm guessing the total run time for your card will be around 150 hours, so there's no way you could have made the deadline anyway. Yes, you should keep crunching. First of all, with the high error rate right now, there's a decent chance you'll return the WU before any other wingman does. But even if they do return it before you, as long as you return yours before the WU gets purged from the database, you will still get full credit, provided your result is correct.
I'll let the admins know about this.
____________
My lucky number is 75898524288+1 |
|
|
|
Thanks Michael. The WU is about 12 hours from completing so total run time is going to be close to 165 hours! I will run it to completion. |
|
|
|
My Genefer WR work unit completed successfully, after 168 hours or so, with credit awarded. Glad it worked out OK |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
Awesome!
____________
My lucky number is 75898524288+1 |
|
|
|
The Preferences page should be updated to include Linux CPU. It is running quite well: less than 24 hours with a 2500K in a VM, which is not bad compared to some slow GPUs, considering that a GPU runs a single WU while a CPU can run as many WUs as it has cores.
Is there any news on GenefX64 for Windows on BOINC?
____________
676754^262144+1 is prime |
|
|
|
The build is done, we're just in the process of implementing the new builds on BOINC.
____________
Twitter: IainBethune
Proud member of team "Aggie The Pew". Go Aggie!
3073428256125*2^1290000-1 is Prime! |
|
|
|
Running my first Genefer on my new GTX 560 Ti, installed only a few hours ago. It's the first time I have run a GPU, so I'm kind of disappointed that the first WU has taken over 4 hours already. I was under the impression that using a GPU would cut run times down dramatically.
The WU is currently about 56% complete, so it looks like it will take another 3 hours plus to finish.
Is there any information on how to optimise GPUs for running PrimeGrid in general?
Settings for the Genefer are CUDA Short WUs.
Am I missing something, especially after reading earlier in this thread about these WUs taking only about 3994 seconds to complete?
Look forward to your replies.
Kind regards
The Knighty NI
____________
The art of flying is throwing yourself at the ground and missing.
|
|
|
|
Hi The Knighty NI,
The crunch time has been cut down significantly! Using a modern CPU, it would take somewhere between 30 and 40 hours to finish such a unit, if I'm not mistaken. :)
To answer your question about the earlier units: yes, you are missing something. They were much smaller and therefore of course took a lot less time to check.
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|
|
Hi Pyrus
Thanks for the heads up.
Definitely missed the fact that the WU's are bigger. :)
____________
The art of flying is throwing yourself at the ground and missing.
|
|
|
|
I am using i7-920 (2.67 GHz) and i7-930 (2.8 GHz) CPUs and they are taking between 38 and 50 hours. So your 8-9 hours on the GPU is very reasonable.
I am itching for a new box. An i7 with AVX and a decent Nvidia GPU is my goal. Just don't have the $ to spend currently. |
|
|
|
Just a thought regarding the new GPU.
Seems the GPU loses focus and starts to wander, hibernate, or whatever. It's noticeable that the temp drops from a reasonable 50C down to about 41C when focus has been lost.
Just switched the WU's to see what would happen.
Temp came back up and the WU I switched to:
00:02:28 elapsed, 1.373% of the task done (WU start time was 07:49:58, current 07:35:43), which indicates that the WUs should complete in just under 3 hours with this GPU.
Switched back to the original WU and the GPU is now focused and completed the remaining 30 mins in less than 2 mins.
What could cause this?
____________
The art of flying is throwing yourself at the ground and missing.
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
For the short WUs, my 460 would complete 280658^524288+1 in 5:13:08. That's the current leading edge for the short WUs (i.e., the longest short WU sent out so far), and the time estimate should be accurate to about 1% or better.
I'm not sure off the top of my head how the 560ti compares to the 460, but unless the 560Ti is a slower card, 8 to 9 hours seems too long.
Just read your second post: the GPU shouldn't be turning on and off like that. You should check your BOINC settings, under Tools-->Computing preferences...:
On the Processor Usage tab:
Make sure "while processor usage is less than ## percent" is set to 0.
Also make sure "Use at most ##% of CPU time" is 100.00
Other settings on that tab will turn off computing, but that may be desirable depending on how you use the computer.
____________
My lucky number is 75898524288+1 |
|
|
|
Checked everything and it all seems OK.
The GPU is still hibernating, going to sleep, or doing something else. Not sure what is happening.
Here is a screenshot of my settings showing the WU that is current.
The line just above the BOINC Preferences and V Tune dialogue boxes.
____________
The art of flying is throwing yourself at the ground and missing.
|
|
|
|
30 mins later
Everything is fine.
Moved screenshot to include temps and CPU usage in the bottom right of the shot.
Seems that while I am using my 'puter, the GPU keeps focus. As soon as I go off to do other stuff, it loses focus. Hummm
____________
The art of flying is throwing yourself at the ground and missing.
|
|
|
|
Do you have, by any chance, drivers 295 or 296 installed?
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|
|
Good point.
How can I tell which drivers are installed?
____________
The art of flying is throwing yourself at the ground and missing.
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
Do you have, by any chance, drivers 295 or 296 installed?
Those drivers would not cause this problem. They only prevent a WU from starting up and cause a computation error. They wouldn't cause the app to slow down.
Do you have a screen saver enabled? That could be causing the GPU to spend its time drawing pretty pictures on the screen rather than crunching.
Along the same line of thought, is your power scheme set to put the computer to sleep when you're away from the computer? GPUs that are powered off generally tend to run very slowly. :)
____________
My lucky number is 75898^524288+1 |
|
|
|
Grrrr
Yes pretty pictures have been enabled on my PC.
Switched off as of now. :)
Latest screenshot, which shows the WU is fine and ahead of the last one by quite a margin.
____________
The art of flying is throwing yourself at the ground and missing.
|
|
|
|
Sweet, looks like the next WU is going to come in at around 2.93 hours as expected. :)
The first one didn't make that due to wrong settings which have been pointed out.
Thank you to everyone who has helped.
Let's see what happens over the next few days. I may need to draw upon your knowledge to refine this GPU.
Kind regards
The Knighty NI
____________
The art of flying is throwing yourself at the ground and missing.
|
|
|
|
Thanks for the suggestions and help everyone.
Everything is stable now and running as expected.
Not far from completing WU Numero 3 now, literally within the next 35 Mins :)
The target time of just shy of 3 hours per WU, which I originally calculated, is now happening for the Genefer WUs :)
____________
The art of flying is throwing yourself at the ground and missing.
|
|
|
|
I'm sure this was raised before but I can't find the previous posts.
Both these tasks ran on the same pc at the same time but device 1 has somehow managed to use (or think it used) nearly an hour of cpu time:
http://www.primegrid.com/results.php?hostid=193489&offset=0&show_names=0&state=0&appid=16
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
I'm sure this was raised before but I can't find the previous posts.
Both these tasks ran on the same pc at the same time but device 1 has somehow managed to use (or think it used) nearly an hour of cpu time:
http://www.primegrid.com/results.php?hostid=193489&offset=0&show_names=0&state=0&appid=16
Never saw anything like that before! Not a clue what's causing it.
____________
My lucky number is 75898^524288+1 |
|
|
rroonnaalldd Volunteer developer Volunteer tester
 Send message
Joined: 3 Jul 09 Posts: 1213 ID: 42893 Credit: 34,634,263 RAC: 0
                 
|
I'm sure this was raised before but I can't find the previous posts.
Both these tasks ran on the same pc at the same time but device 1 has somehow managed to use (or think it used) nearly an hour of cpu time:
http://www.primegrid.com/results.php?hostid=193489&offset=0&show_names=0&state=0&appid=16
Never saw anything like that before! Not a clue what's causing it.
Could be a driver issue (266.58), a normal hiccup, or something inside the OS...
I have something like that with Collatz on my GTS 450. In the past it used only low CPU time, but yesterday the same app used a full CPU core, with run time = CPU time.
The only difference is the driver; my Lubuntu 11.10 installation updated to 295.40...
____________
Best wishes. Knowledge is power. by jjwhalen
|
|
|
|
Interested in having a go at finding a huge Prime of this category.
How can I tell if my GPU is double precision?
the card is an NVIDIA GeForce GTX 560 Ti
Thanks in advance for your help :)
____________
The art of flying is throwing yourself at the ground and missing.
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
How can I tell if my GPU is double precision?
the card is an NVIDIA GeForce GTX 560 Ti
The GTX 560 Ti is double precision.
Every CUDA-enabled card has a "Compute Capability" (or CC) level, and all cards with CC 1.3 or above can be used. The CC for each card is listed in the card's specifications.
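As a sketch of that rule of thumb in code (the CC values below are illustrative examples from Nvidia's published spec tables, not an exhaustive list):

```python
# Double precision requires Compute Capability 1.3 or above.
def has_double_precision(major, minor):
    """True if a CUDA device of this CC level has double-precision hardware."""
    return (major, minor) >= (1, 3)

# Illustrative CC values; check your own card's spec page for the real numbers.
cards = {"GTX 260": (1, 3), "GTX 560 Ti": (2, 1), "GeForce 9600M GS": (1, 1)}
for name, cc in cards.items():
    status = "usable for GFN" if has_double_precision(*cc) else "no double precision"
    print(f"{name}: CC {cc[0]}.{cc[1]} -> {status}")
```

On a machine with the CUDA toolkit installed, the actual CC of each device can be read with the `deviceQuery` sample; the table above just hard-codes a few examples.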
____________
My lucky number is 75898^524288+1 |
|
|
|
I have a Genefer 1.07 CUDA32_13 task that has been running on my computer for 174 hours with 73 hours left. Is this a Generalized Fermat Prime Search WU? If so, why did I get it? I have the long search disabled; my preference setting is for short tasks only! The WU was due on 11/16 at midnight, so it will be long past that when it's finished, and all my other GPU WUs for other projects are now behind as well. What good is setting my preferences if the project ignores them? I have been gone for 4 days or I would have aborted this WU. I will really be pissed if this WU fails <angry face> |
|
|
|
Nope, that's just a regular short Genefer WU. It's taking such a long time because the 600-series NVidia cards are really bad at the double precision arithmetic that genefers need. Even the top-of-the-line cards are, if I'm not mistaken, slower than the fastest cards from the 500 series. Seeing as you have one on the lower end of the 600s, yours is particularly slow.
Basically, what can be concluded from this, if nothing else is hindering your speed, is that running genefers on low-end 600s is best avoided, as even an average CPU seems to be faster.
If you still want to use your GPU for PrimeGrid, I would suggest switching it over to PPS Sieve. That project only requires single precision arithmetic, something at which the 600 series is very good.
Oh, and with regards to this unit: you will still get credit if you manage to get it done before it gets purged from the system. I'm not entirely sure when that will happen, but I'm sure someone else will share that info :)
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
Indeed, the GT 620 is a very slow GPU -- not just for double precision programs like Genefer, but for all programs. It's one of the "value" or "entry level" GPUs in the 600 series.
That being said, a 250 hour run-time for that work unit is surprisingly long -- the fastest GPUs can complete those about 100 times faster.
I'm wondering if there's something that's somehow interfering with the GPU and keeping it from running full speed?
____________
My lucky number is 75898^524288+1 |
|
|
|
Indeed, the GT 620 is a very slow GPU -- not just for double precision programs like Genefer, but for all programs. It's one of the "value" or "entry level" GPUs in the 600 series.
That being said, a 250 hour run-time for that work unit is surprisingly long -- the fastest GPUs can complete those about 100 times faster.
I'm wondering if there's something that's somehow interfering with the GPU and keeping it from running full speed?
I did have the screensaver running, which is now shut off. By slow, which specs should I be looking at for speed? The GT 620 has a core clock of 700 MHz and the GTX 560 Ti has a 900 MHz core. Doesn't seem like that's much of a difference. The memory speeds are quite different, though.
I have disabled the Genefer WUs. |
|
|
Scott Brown Volunteer moderator Project administrator Volunteer tester Project scientist
 Send message
Joined: 17 Oct 05 Posts: 2408 ID: 1178 Credit: 19,577,748,615 RAC: 11,711,407
                                                
|
The GT 620 is not a Kepler card. It is a rebranded 520 as I recall (i.e., a Fermi card). The most important thing to look at for card differences in CUDA performance is the number of shaders (and then shader clock after that). As I recall, a GT 620 has 48 shaders. Compare that to a mid-range Fermi card like the GTX 460 (336 shaders) and you can see how substantial the performance difference would be.
Better projects for a card like yours would be the PPS sieve on BOINC; on PRPNet this card would do okay with the Wieferich or Wall-Sun-Sun prime searches.
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
I did have the screensaver running which is now shut off.
That is likely what was causing it to be so slow. My guesstimate is that it should be able to run one of those work units in about 50 hours -- about the same as your CPU could do.
By slow, which specs should I be looking at for speed? The GT 620 has a core clock of 700 MHz and the GTX 560 Ti has a 900 MHz core. Doesn't seem like that's much of a difference. The memory speeds are quite different, though.
Scott's answer is correct -- it's the number of shaders that's most important. The 620 has 48 shaders, a mid-range card has about 400-500 shaders, and the top of the line have (I think) about 1500 shaders. A "shader" on a GPU is roughly analogous to a core on a CPU: the more shaders there are, the more calculations can be done simultaneously. It's the ability to run many calculations simultaneously that makes GPUs so fast, so the more shaders there are, the faster the GPU can crunch.
____________
My lucky number is 75898^524288+1 |
|
|
|
I did have the screensaver running which is now shut off.
That is likely what was causing it to be so slow. My guesstimate is that it should be able to run one of those work units in about 50 hours -- about the same as your CPU could do.
By slow, which specs should I be looking at for speed? The GT 620 has a core clock of 700 MHz and the GTX 560 Ti has a 900 MHz core. Doesn't seem like that's much of a difference. The memory speeds are quite different, though.
Scott's answer is correct -- it's the number of shaders that's most important. The 620 has 48 shaders, a mid-range card has about 400-500 shaders, and the top of the line have (I think) about 1500 shaders. A "shader" on a GPU is roughly analogous to a core on a CPU: the more shaders there are, the more calculations can be done simultaneously. It's the ability to run many calculations simultaneously that makes GPUs so fast, so the more shaders there are, the faster the GPU can crunch.
The GT 620 has 96 stream processors. I can't find any info that says anything about shaders. I understood that the stream processors were the GPUs. Yes, it is on the low end, but quite a step up from the GTX 8600 I had. It's the best I can afford at this time. |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
The gt620 has 96 stream processors.
Looked it up... there are two different GT 620s. One with 48 shaders (which is actually a rebranded GT 520) and another -- the one you have -- with 96 shaders (which is a rebranded GT 530).
I can't find any info that says anything about shaders. I understood that the stream processors were the GPUs.
The names "shader" and "stream processor" mean the same thing, in this context.
Yes it is on the low end but quite a step up from the gtx 8600 I had. It's the best I can afford at this time.
Yes, depending on which model of the 8600 you had, the 620 is likely at least twice as fast, maybe a bit more.
A word of caution for the future, however. In Nvidia's numbering scheme, the last two digits, i.e. the "20" in "620" are more important than the first digit (i.e., the "6") in gauging how powerful the card is. For example, a GTX 480 is a lot faster than a GTX 560. The first digit is the generation of the card, and, for sure, each successive generation is somewhat faster and more energy efficient than the previous generation, but the difference between the high-end and low-end cards within each generation is huge.
EDIT: Getting back on topic, with the screen saver off, how's the GPU doing with GeneferCUDA?
____________
My lucky number is 75898^524288+1 |
|
|
|
Yes, depending on which model of the 8600 you had, the 620 is likely at least twice as fast, maybe a bit more.
A word of caution for the future, however. In Nvidia's numbering scheme, the last two digits, i.e. the "20" in "620" are more important than the first digit (i.e., the "6") in gauging how powerful the card is. For example, a GTX 480 is a lot faster than a GTX 560. The first digit is the generation of the card, and, for sure, each successive generation is somewhat faster and more energy efficient than the previous generation, but the difference between the high-end and low-end cards within each generation is huge.
EDIT: Getting back on topic, with the screen saver off, how's the GPU doing with GeneferCUDA?
Thank you for the info. I am hoping to upgrade to a much better computer with a good vid card this spring. I am waiting on a settlement that is taking its sweet time. I wanted something that would work well in the interim. The 620 is holding its own so far. I just got another Genefer WU, so I will let you know how it does this time around. It did finish the last one with credit, though it was a bit late. :) I had several other CUDA WUs time out, though. Couldn't be helped. |
|
|
|
Could you please give me the part of the app_info.xml code needed to receive GeneferX64 CPU tasks?
<app_info>
<app>
<name>...?...</name>
</app>
<file_info>
<name>primegrid_genefer_2_3_0_0_1.07_windows_x86_64.exe</name>
<executable/>
</file_info>
<app_version>
...???...
</app_version>
</app_info> |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
Could you please give me the part of the app_info.xml code needed to receive GeneferX64 CPU tasks?
<app_info>
<app>
<name>...?...</name>
</app>
<file_info>
<name>primegrid_genefer_2_3_0_0_1.07_windows_x86_64.exe</name>
<executable/>
</file_info>
<app_version>
...???...
</app_version>
</app_info>
I think this is what you need:
<app>
<name>genefer</name>
<user_friendly_name>Genefer</user_friendly_name>
<non_cpu_intensive>0</non_cpu_intensive>
</app>
and
<app_version>
<app_name>genefer</app_name>
<version_num>107</version_num>
<platform>windows_x86_64</platform>
<avg_ncpus>1.000000</avg_ncpus>
<max_ncpus>1.000000</max_ncpus>
<flops>2227345910.517703</flops>
<api_version>7.0.7</api_version>
<file_ref>
<file_name>primegrid_genefer_2_3_0_0_1.07_windows_x86_64.exe</file_name>
<main_program/>
</file_ref>
</app_version>
That's the info from my client_state.xml file.
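For anyone assembling the whole file: combining the two fragments above with the file_info block from the question gives something like the following. This is a sketch only; the <flops> value here came from one specific host, and the file name must match whatever executable your client actually downloaded.

```xml
<app_info>
    <app>
        <name>genefer</name>
        <user_friendly_name>Genefer</user_friendly_name>
        <non_cpu_intensive>0</non_cpu_intensive>
    </app>
    <file_info>
        <name>primegrid_genefer_2_3_0_0_1.07_windows_x86_64.exe</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>genefer</app_name>
        <version_num>107</version_num>
        <platform>windows_x86_64</platform>
        <avg_ncpus>1.000000</avg_ncpus>
        <max_ncpus>1.000000</max_ncpus>
        <flops>2227345910.517703</flops>
        <api_version>7.0.7</api_version>
        <file_ref>
            <file_name>primegrid_genefer_2_3_0_0_1.07_windows_x86_64.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
```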
____________
My lucky number is 75898^524288+1 |
|
|
Honza Volunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1962 ID: 352 Credit: 6,336,802,620 RAC: 2,351,903
                                      
|
Has anybody had luck with standard preferences (not messing with app_info) avoiding GFN CPU tasks?
The GPU is in this case ~17x faster compared to a CPU core.
My settings are: only PPS and SGS for CPU, and only GFN for GPU, yet I still get some GFN CPU tasks. ("Send work from any subproject if selected projects have no work" is disabled.)
____________
My stats |
|
|
|
Has anybody had luck with standard preferences (not messing with app_info) avoiding GFN CPU tasks?
The GPU is in this case ~17x faster compared to a CPU core.
My settings are: only PPS and SGS for CPU, and only GFN for GPU, yet I still get some GFN CPU tasks. ("Send work from any subproject if selected projects have no work" is disabled.)
I've seen that happen, but only after restarting BOINC with a clean cache. I solved it by doing this:
1. On the prefs page, uncheck "use NVIDIA GPU" and "use ATI GPU", and select only the LLR subprojects (the ones you want to run).
2. Open BOINC and get a couple of WUs. It may start to say "no selected work available". If so, do manual updates until you get what you want (it may take a few minutes).
3. After getting LLR work, go back to the prefs page, select GFN GPU and check the "use NVIDIA GPU" box (leave only "use ATI GPU" unchecked).
4. Update BOINC (you should get only GPU tasks this time).
After that, BOINC only requests the selected work. Hope it works for you.
____________
676754^262144+1 is prime |
|
|
|
It seems to sometimes happen if CPU and GPU tasks are asked for at the same time. Because PPS and SGS tasks are smaller, they run out quite often, and that can coincide with a GFN task finishing and asking for work.
It is a BOINC issue, not a PG issue. The BOINC programmers have not figured out a way to stop this from happening, yet.
Some people have stopped the CPU task from running by denying the CPU executable permissions to run, so when it tries to run it errors out immediately and sends the unit back so someone else can crunch it.
____________
My lucky numbers are 121*2^4553899-1 and 3756801695685*2^666669±1
My movie https://vimeo.com/manage/videos/502242 |
|
|
Honza Volunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1962 ID: 352 Credit: 6,336,802,620 RAC: 2,351,903
                                      
|
Thanks for your insights.
Will try that "feed CPUs first and then enable GPU" scenario when convenient.
____________
My stats |
|
|
|
PB wrote:
Some people have stopped the CPU task from running by denying the CPU executable permissions to run, so when it tries to run it errors out immediately and sends the unit back so someone else can crunch it.
This is what I do for PPS Sieve. Works like a champ. (I'm on boinc v. 6.10.xx)
Honza wrote:
Will try that "feed CPUs first and then enable GPU" scenario when convenient.
I don't think this will work in the long term, unless you can babysit BOINC 24/7. At some future date when the task completion times line up just so, BOINC may request CPU and GPU units together, and you may get unwanted GFN CPU work. Granted this is much less likely/frequent than with PPS Sieve since GFN work is much longer.
--Gary |
|
|
|
Gary,
You're quite right about the long-run risk. However, adding the report-immediately tag to the config file reduces that risk without the need for 24/7 babysitting, and might increase the chances of being the initial finder.
The permissions-restriction approach is a good idea too. However, unlike PPS Sieve, it might prevent you from finding a huge prime if you deliberately error out a prized WU...
____________
676754^262144+1 is prime |
|
|
|
PB wrote:
Some people have stopped the CPU task from running by denying the CPU executable permissions to run, so when it tries to run it errors out immediately and sends the unit back so someone else can crunch it.
This is what I do for PPS Sieve. Works like a champ. (I'm on boinc v. 6.10.xx)
--Gary
How do you do that?
____________
676754^262144+1 is prime |
|
|
rroonnaalldd Volunteer developer Volunteer tester
 Send message
Joined: 3 Jul 09 Posts: 1213 ID: 42893 Credit: 34,634,263 RAC: 0
                 
|
How do you do that?
On Linux, change the file permissions from 0777 to 0666 (meaning read and write, but not execute, for owner, group and all).
On Windows, right-click the file, go to Properties, then Security, and remove the combined read/execute permission. I think you must remove the "Full control" permission before you can deselect the individual permissions on a file.
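As a sketch of the Linux side (the shell equivalent is just chmod 666 <file>; the filename below is a stand-in for whatever Genefer binary your client actually downloaded, not the real name):

```python
import os
import stat

# Stand-in filename for this demonstration; on a real host you would target
# the GFN CPU executable inside the PrimeGrid project directory.
path = "genefer_cpu_dummy.exe"
open(path, "w").close()

# 0o666 = read/write for owner, group and world, with every execute bit
# cleared, so any attempt to launch the file fails immediately.
os.chmod(path, 0o666)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o666
```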
____________
Best wishes. Knowledge is power. by jjwhalen
|
|
|
|
Has anybody had luck with standard preferences (not messing with app_info) avoiding GFN CPU tasks?
The GPU is in this case ~17x faster compared to a CPU core.
My settings are: only PPS and SGS for CPU, and only GFN for GPU, yet I still get some GFN CPU tasks. ("Send work from any subproject if selected projects have no work" is disabled.)
I had this problem. I simply let the GPU get "full" to the download limit, then update to get CPU work only. Any excess GPU or GFN CPU tasks can be sent back by aborting.
The problem only happens when your system requests both CPU and GPU tasks at once and the BOINC server fills the GPU request first, then tries to fill the CPU request but does not change the WU-type flag from the GPU setting.
It also happens with PPS Sieve WUs.
____________
Member team AUSTRALIA
My lucky number is 9291*2^1085585+1 |
|
|
|
Are you aware that 64-bit applications for Windows are more memory-efficient, especially under Windows Vista, than 32-bit applications, and therefore may be desirable even if they don't run any faster? 64-bit Windows uses various SYSWOW64 modules to let it run 32-bit applications. Under 64-bit Windows Vista, the memory occupied by these SYSWOW64 modules is approximately the same as the memory occupied by the 32-bit applications being run, but isn't counted as memory being used by BOINC. As a result, memory runs out before the 32-bit applications use half of it. 64-bit applications don't need any SYSWOW64 modules, and therefore more of them can run in the same amount of memory, if there are enough CPU cores. In 64-bit Windows 7, the SYSWOW64 modules are smaller, so you are less likely to run out of memory. I have not had a chance to check how 64-bit Windows XP and 64-bit Windows 8 handle this, or how 64-bit Linux and 64-bit Macs handle this. |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
Yes, I've been following the discussion on the BOINC mailing lists on that topic. Since we're talking about GPU applications, however, most hosts will be running only one instance of the program, so it's unlikely memory usage will be a problem.
As far as our CPU apps go, I think all of them are now either only available in 64-bit versions (GFN), or available in both 32- and 64-bit versions (LLR and the two sieves).
____________
My lucky number is 75898^524288+1 |
|
|
|
Seems that the GFN WR task I am doing is only going to be half done by the deadline on 16-3-15. That said, I still wish to complete it; will it still count if it finishes after this date, or am I just wasting my time continuing?
I really did not think it would take this long; I just thought the time estimate was going to be inaccurate.
Hope someone can shed some light for me here, as it seems a waste to stop since it's been going this long already. |
|
|
RogerVolunteer developer Volunteer tester
 Send message
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,668,824 RAC: 0
                    
|
Keep going if you like. You'll still get credit. Can you post the link to the task? |
|
|
|
Assuming your work unit is ultimately declared "valid", you will receive BOINC credit for it as long as it is returned before being purged from the database, which is at least a couple of weeks after your deadline.
Your computers are hidden so I don't know what an "expected" run-time might be for you. I'll say that I have a gtx570 and gtx770 and they are in the 4-to-5+ day range running 24/7 on GFN-WR... maybe you can use that to estimate if your cards are performing appropriately or if something is wrong. There are some new GPUs that can run them in half(-ish) that time, but plenty of cards not much older than mine will take twice or more as long.
Good luck!
--Gary |
|
|
JimB Honorary cruncher Send message
Joined: 4 Aug 11 Posts: 920 ID: 107307 Credit: 989,290,184 RAC: 192
                     
|
Looks like it's a laptop.
The computer looks to be:
GenuineIntel Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz [Family 6 Model 58 Stepping 9]
with GPU:
NVIDIA GeForce GT 630M (2048MB) |
|
|
|
Thanks guys, will keep going then. I think it's a GT 630; guess that's going to be pretty slow then, judging by how long it's taking, and maybe being a notebook doesn't help either.
Oh well, better to finish than waste the task again, as the other guy will want it validated for him, and it's already been junked many times already.
Actually, I have been trying a lot of the different tasks to see how well I can complete them, as I haven't been doing PrimeGrid for very long.
I am finding it very interesting though, and could get some better hardware to improve performance.
I guess the peril of not being very expert in these types of things is the big drawback, and you must learn as you go along.
|
|
|
RogerVolunteer developer Volunteer tester
 Send message
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,668,824 RAC: 0
                    
|
I've only ever done 12 GFN WR tasks; each one is a big accomplishment.
GFN WR
WU      b      Time      Run time (s)  CPU time (s)  Credit     Date      Validation
276381  12762  72:27:52  260872        260737        560464.43  19/10/13  Double Check
276601  13842  73:56:43  266203        266159        565280.22  19/12/13  Double Check
361838  15766  73:24:48  264288        264289        572995.66  22/12/13  Initial Check
361916  16074  73:28:10  264490        254491        574142.6   25/12/13  Initial Check
362135  17086  74:22:51  267771        267761        577762.14  29/12/13  Initial Check
361934  16152  74:25:57  267957        267959        574429.58  01/01/14  Double Check
361953  16254  74:38:35  268715        268705        574802.77  03/01/14  Initial Check
362107  16982  74:08:48  266928        266929        577400.2   06/01/14  Initial Check
362320  17926  74:50:09  269409        269415        580607.24  11/01/14  Initial Check
362680  19680  75:25:14  271514        271514        586141.25  15/01/14  Initial Check
362710  19844  76:41:25  276085        276054        586633.22  18/01/14  Initial Check
383391  24086  77:06:32  277592        277342        598117.84  24/04/14  Initial Check |
|
|
|
Is there a list available on the website which b values I have done or is this visible to server admins only? |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
Is there a list available on the website which b values I have done or is this visible to server admins only?
None of the above. The server doesn't normally keep that information beyond the purge date for the workunit.
____________
My lucky number is 75898^524288+1 |
|
|
|
Further to Magpie's issue above: I have a GFNWR task (cuda) running on a GTX 750Ti that's currently been running for 242 hrs and is 64.3% complete, with the deadline pretty much 6 days away. Working backwards, that indicates that the task is about 16 full days' worth of crunching, and the 21 day deadline is quite tight for that.
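The back-of-the-envelope arithmetic in the paragraph above, as a quick sketch:

```python
elapsed_hours = 242.0   # time spent crunching so far
fraction_done = 0.643   # 64.3% complete

# Assuming progress is roughly linear, scale up to the full task length.
total_hours = elapsed_hours / fraction_done
remaining_hours = total_hours - elapsed_hours
print(f"total: {total_hours / 24:.1f} days, remaining: {remaining_hours / 24:.1f} days")
```

That gives roughly 15.7 days of total crunching, with about 5.6 days still to go against a 6-day deadline, which is why it looks so tight.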
Is it common for the GFN WR tasks to run this long, and so close to the deadline? Is it only an issue with particular cards? I know the Prefs page gives an indication of 140-ish hours per task, so it seems strange that some experiences are so far beyond that. I do understand from the comments above that receiving credit shouldn't be an issue. |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
Further to Magpie's issue above: I have a GFNWR task (cuda) running on a GTX 750Ti that's currently been running for 242 hrs and is 64.3% complete, with the deadline pretty much 6 days away. Working backwards, that indicates that the task is about 16 full days' worth of crunching, and the 21 day deadline is quite tight for that.
Is it common for the GFN WR tasks to run this long, and so close to the deadline? Is it only an issue with particular cards? I know the Prefs page gives an indication of 140-ish hours per task, so it seems strange that some experiences are so far beyond that. I do understand from the comments above that receiving credit shouldn't be an issue.
The 750 Ti is a pretty good gaming GPU, and it's a pretty good middle-of-the-road GPU for sieving (and most other GPU apps), but it's not that great at GFN.
GPUs vary GREATLY in speed, much more so than do CPUs.
So while some GPUs (with 4 digit price tags) may crunch these in 4 or 5 days, other GPUs (with 2 digit price tags) may take a month or more to do this -- even slower than a lot of CPUs.
It used to take me about 10 days, maybe a bit more, to run these tasks on my old GTX 460. That GPU is *faster* (for GFN) than the much newer GTX 750 Ti. The second digit indicates where in the product line the GPU falls, so the x6x GPUs are more powerful than the x5x GPUs. The first digit (7xx) indicates the generation of the GPU, with higher numbers being newer. Normally, newer is better, but not necessarily. Double precision speed is what's important for GFN calculations, and Nvidia *lowered* the double precision power in more recent GPU architectures. So, other things being equal, x6x is faster than x5x, and 4xx (and 5xx) are faster than 7xx. The 750 Ti just isn't all that fast for this type of calculation. 16 days doesn't seem totally unreasonable.
These tasks are BIG, and we do *not* expect every computer to be able to complete them within the deadline. Yours can, but anything with an x4x or lower GPU probably can't. That's why we also offer the shorter GFN tasks as an alternative. Because the deadline for the short tasks is based on the speed of an older CPU, even a slow GPU will have no trouble with those deadlines.
While we try to make most of the projects usable by everybody's computer, the operative word there is "most". This is one of the ones that's not. You need a fairly decent computer to complete these tasks within the deadline. The middle-of-the-road GPUs (like yours) can do it, but they don't have a lot of leeway. Middle-low GPUs can barely make it. Bargain GPUs can't.
____________
My lucky number is 75898^524288+1 |
|
|
|
Thanks for the info Michael. Yeah I know running tasks on a mid-range card will end up with varying results, and it would be unreasonable to expect you guys to make it run well for everyone. Pretty sucky about Nvidia dropping double-precision speed, but not much to be done about that. Thanks for the info and thanks for the work you guys do; it's good to be a part of this project. |
|
|
|
I have a GTX580, 2 780ti's and 2 980ti's. Are they fast enough for the new challenge? They do fine with the PPS Sieve tasks.
Thanks. |
|
|
RafaelVolunteer tester
 Send message
Joined: 22 Oct 14 Posts: 917 ID: 370496 Credit: 593,422,236 RAC: 543,979
                         
|
I have a GTX580, 2 780ti's and 2 980ti's. Are they fast enough for the new challenge? They do fine with the PPS Sieve tasks.
Thanks.
Yes they are. For the 980 Tis, though, you might want to consider using an app_info file so you can do GFN-WR tasks using the OCL3 app, which is much faster on Maxwell cards than the regular OCL app (the one BOINC is using right now). |
|
|
|
using the app_info file so you can do GFN-WR tasks using the OCL3 app, which is much faster on Maxwell cards than regular OCL (the one Boinc is using right now).
What would the relevant lines in the app_info file look like? I am not able to find an example of an app_info file that tells BOINC to use OCL3 for the GFN-WR tasks. I do have two GTX 970s and would like to participate in the challenge.
And an additional question: does anybody know why only PrimeGrid (neither PPS Sieve nor GFN) does not recognize the GPU part of my AMD 7870K? PrimeGrid works with the four CPU cores flawlessly, but does not see the integrated GPU. However, POEM, Einstein, SETI and Collatz Conjecture all see the GPU on the chip. Any suggestions? |
|
|
Yves Gallot Volunteer developer Project scientist Send message
Joined: 19 Aug 12 Posts: 834 ID: 164101 Credit: 306,388,940 RAC: 4,256

|
And an additional question: does anybody know why only PrimeGrid (neither PPS Sieve nor GFN) does not recognize the GPU part of my AMD 7870K? PrimeGrid works with the four CPU cores flawlessly, but does not see the integrated GPU. However, POEM, Einstein, SETI and Collatz Conjecture all see the GPU on the chip. Any suggestions?
Genefer OCL3 should detect the device and run...
You can download it at
https://www.assembla.com/spaces/genefer/subversion/source/HEAD/trunk/bin/windows
and start the test.
It runs on my old "AMD Mobility Radeon HD 5650, 'Madison', Jan 2010":
Running on platform 'AMD Accelerated Parallel Processing', device 'Redwood', version 'OpenCL 1.2 AMD-APP (1800.11)' and driver '1800.11 (VM)'.
|
|
|
|
I've downloaded the Windows OCL3 app but can't seem to get the details quite right - can someone please post a sample app_info to run WR tasks?
TIA - Steve |
|
|
RafaelVolunteer tester
 Send message
Joined: 22 Oct 14 Posts: 917 ID: 370496 Credit: 593,422,236 RAC: 543,979
                         
|
I've downloaded the Windows OCL3 app but can't seem to get the details quite right - can someone please post a sample app_info to run WR tasks?
TIA - Steve
Not sure if this one works or not, but I found this lying around my PC:
<app_info>
    <file_info>
        <name>geneferocl3_windows.exe</name>
        <executable/>
    </file_info>
    <app>
        <name>GeneferWR</name>
    </app>
    <app_version>
        <app_name>GeneferWR</app_name>
        <version_num>309</version_num>
        <plan_class>cuda</plan_class>
        <avg_ncpus>1</avg_ncpus>
        <max_ncpus>1</max_ncpus>
        <coproc>
            <type>CUDA</type>
            <count>1</count>
        </coproc>
        <file_ref>
            <file_name>geneferocl3_windows.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info> |
|
|
Honza Volunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1962 ID: 352 Credit: 6,336,802,620 RAC: 2,351,903
                                      
|
I've downloaded the Windows OCL3 app but can't seem to get the details quite right - can someone please post a sample app_info to run WR tasks?
See the Can't get tasks for AMD R9 290X thread; there are samples posted there.
____________
My stats |
|
|
|
I had already put an app_info together, but it only runs on the iGPU and not the GTX 970
- I was being lazy and seeing if someone had one that worked.
The 280X thread does not have the specifics for OCL3 w/ Nvidia.
I've opened a new thread so as to not sully this one any more :-)
Request for help with Genefer app_info:
http://www.primegrid.com/forum_thread.php?id=6585&nowrap=true#90778 |
|
|
|
Can I use a GeForce 9600M GS for GFN-15 and GFN-16? |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
Can I use GeForce 9600M GS for GFN-15 and GFN-16 ?
Wow, that's pretty ancient.
I *think* it will work, if the video driver supports OpenCL. (I believe that driver supports OpenCL, although the release notes don't actually say so.)
It's going to be MUCH slower than your other GPUs, but I'm sure you already know that. (By my calculations, it will be more than 300 times slower than your 1050 Ti. The difference on GFN15 and 16 should be smaller, however, because the big GPUs aren't as efficient on small tasks like those.) The Core 2 CPU might actually be able to run those tasks faster than the GPU can.
Go ahead and try it. Either it will work (albeit slowly), or it won't.
____________
My lucky number is 75898^524288+1 |
|
|
|
Thank you! |
|
|
|
The last reference to overclocking problems in this thread is from 2012. Is that still a large concern? I have a factory overclocked 980 that has always been stable for me, but don't know if instability that is enough to throw off Genefer is something I would notice otherwise. Beyond just running some really long ones and seeing how it goes, is there anything I can do to check GPU stability? |
|
|
RafaelVolunteer tester
 Send message
Joined: 22 Oct 14 Posts: 917 ID: 370496 Credit: 593,422,236 RAC: 543,979
                         
|
The last reference to overclocking problems in this thread is from 2012. Is that still a large concern? I have a factory overclocked 980 that has always been stable for me, but don't know if instability that is enough to throw off Genefer is something I would notice otherwise. Beyond just running some really long ones and seeing how it goes, is there anything I can do to check GPU stability?
Overclocking isn't as much of a problem as it was in days past; you can get some boost out of an overclock. But it's still a concern: usually, you have to clock it lower than you would for a typical game.
If you want to check for stability, you should either do tasks that someone else has already completed (so your result is checked against theirs as soon as possible) or just download the program and run it manually to see whether you get proper results. |
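[Editor's note: what Genefer is checking can be illustrated in miniature. The sketch below (Python, with hypothetical helper names) runs a Fermat probable-prime test on tiny generalized Fermat numbers b^(2^n)+1. Genefer performs the equivalent computation with FFT-based arithmetic at multi-million-digit sizes, so this is only a conceptual model, not a substitute for the real app.]

```python
# Conceptual sketch only: Genefer tests numbers of the form
# b^(2^n) + 1 for probable primality. For tiny GFNs the same
# check fits in a few lines of Python.

def gfn(b: int, n: int) -> int:
    """Generalized Fermat number F(b, n) = b^(2^n) + 1."""
    return b ** (2 ** n) + 1

def is_fermat_prp(N: int, a: int = 3) -> bool:
    """Fermat probable-prime test to base a: a^(N-1) == 1 (mod N)."""
    return pow(a, N - 1, N) == 1

# F(2, 4) = 65537 is the largest known Fermat prime.
print(is_fermat_prp(gfn(2, 4)))   # True
# F(3, 2) = 82 = 2 * 41 is composite.
print(is_fermat_prp(gfn(3, 2)))   # False
```

An invalid Genefer result from an unstable overclock corresponds to the arithmetic itself going wrong mid-test, which is why the project double-checks every task against a second computer.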
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14014 ID: 53948 Credit: 468,507,646 RAC: 683,945
                               
|
The last reference to overclocking problems in this thread is from 2012. Is that still a large concern? I have a factory overclocked 980 that has always been stable for me, but don't know if instability that is enough to throw off Genefer is something I would notice otherwise. Beyond just running some really long ones and seeing how it goes, is there anything I can do to check GPU stability?
Probably not -- but I have not changed the recommendation out of an abundance of caution. Usually it's pretty obvious when you've got too much overclocking because many of your results will be inconclusive or invalid.
So give it a try. You'll know soon enough if there's a problem.
____________
My lucky number is 75898^524288+1 |
|
|
|
Thanks! |
|
|
|
The last reference to overclocking problems in this thread is from 2012. Is that still a large concern? I have a factory overclocked 980 that has always been stable for me, but don't know if instability that is enough to throw off Genefer is something I would notice otherwise. Beyond just running some really long ones and seeing how it goes, is there anything I can do to check GPU stability?
Probably not -- but I have not changed the recommendation out of an abundance of caution. Usually it's pretty obvious when you've got too much overclocking because many of your results will be inconclusive or invalid.
So give it a try. You'll know soon enough if there's a problem.
I overclock my GTX 650 Ti for PPS Sieve. I switched to GFN-16 for the lower TDP and forgot to remove the overclock; roughly half of the tasks were marked as invalid. I'd say overclocking is still a concern when running GFN.
____________
676754^262144+1 is prime |
|
|
|
I have found it to be extremely card-dependent for Nvidia, a bit less so for AMD.
I have a 10% (100 MHz) overclock on my 7990s, at least in the wintertime, a 50 MHz o/c on a 290X, and 50 MHz on a 280X, all at stock voltages. None of them have any issue with GFN, so long as all their fans are working :)
I run with zero o/c on my Maxwell and earlier Nvidia GPUs. I have had some success with Pascal GPUs, but it seems to be very GPU- and project-specific, meaning you'll have to try it. |
|
|