NFS Challenge, 21.06. to 26.06.

Hi Carlos Pinho
Today it is not as hot as the last few days, so I have switched on some computers and am now crunching NFS@Home for a while.
On Saturday we will get 37 °C again; maybe I will have to switch them off.
I think the L'Alliance Francophone team has the same problem with the temperature. The hot air comes over France to us. ;)
 
I know. Here in the UK we had a heat wave last week coming from the south of Europe. When I arrived from overseas (Brazil), London was at 35-37 °C (a record for the last 30 years).
Thanks for the help, we appreciate it.
 
Hi Carlos, is there a possibility to run the lasieve5f application on Windows computers as well?
 
Thanks. OK, it's running on my FX-8120 with 8 threads; it's using more than 7 of 8 GB RAM :-/
I'm still watching, and maybe I'll have to reduce to 6 threads (or put in more RAM modules :D).
 
Each instance will use up to 1 GB, depending on the position of the range being sieved. Right now it's using 850 MB per thread on my machines.
 
That's right; I see the same on my machines. With 8 GB and 7 threads it works smoothly.

--- Update ---

So, starting tonight I will scale back a bit again. The FX actually wasn't planned at all, so it goes out again; instead, an Athlon 5350 goes in with (unfortunately, due to lack of memory, only) 3 threads.
 
In return, I've sent 12 cores together with 32 GB RAM into the NFS race.
 
Thank you for the current CPU power dedicated to NFS@Home. Could you check whether utg can be reached to help us? I think utg could easily make more than 1M BOINC points per day.

Cheers,

Carlos
 
That may be, but utg is not an active forum user. None of us knows him. He's a ghost, a shadow, but a very good helper.
 

Thank you.

In the meantime, Syracuse University has ramped up its numbers over the past week, and you guys are doing great. You are delivering an extra ~3000 16e Lattice Sieve V5 wus per day; that's awesome.

So thank you Planet 3DNow! team and keep crunching!!!

Carlos
 
Just to give an update on the current sieve of 6,490+ (G6p490 files).
G means GNFS (general number field sieve) and "p" stands for plus (+).

Currently it is set to sieve up to q=2000M, meaning the workunits will go up to G6p490_2000000. We are currently at ~G6p490_1590000; since each workunit covers 2 units of the file index (a q-range of 2,000), we still have to process about (2000000 - 1590000)/2 = 205k wus, plus aborted ones. Overall, that is the number you see under "Unsent" for 16e Lattice Sieve V5.

The sieve is the second stage of factoring a number. After the sieve, we will have all the data needed to try to build a matrix (the linear algebra phase), so the post-processing (third stage) can be run in parallel on a big cluster at the university. The cluster used is usually a 512-node one with an InfiniBand interconnect; we use it because we have a grant for it and it is impossible to run this stage through BOINC. It can take up to 1-2 months in a row (depending on several variables).
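The remaining-workunit arithmetic can be sketched as a small Python helper. This is only a sketch: the per-workunit q-range of 2,000 is inferred from the numbers quoted in the thread (410M of q remaining, ~205k wus), not stated explicitly by the project.

```python
# Rough estimate of remaining 16e Lattice Sieve V5 workunits for 6,490+.
# Assumption (inferred, not official): each workunit sieves a q-range of 2,000.
Q_PER_WU = 2_000  # special-q values covered by one workunit (assumed)

def remaining_wus(q_current: int, q_target: int, q_per_wu: int = Q_PER_WU) -> int:
    """Workunits still to process between the current and target special-q."""
    return (q_target - q_current) // q_per_wu

# Current position q = 1590M, target q = 2000M:
print(remaining_wus(1_590_000_000, 2_000_000_000))  # 205000, i.e. ~205k wus
```

This matches the ~205k figure above; aborted workunits that get re-issued come on top of it.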

For the next number, 2,1285-, we will need to process something like 1M wus.
 
Seems to be never-ending work.
 
The 2,1285- sieve has started. We will go up to q=2200M or 2400M, meaning we will have 1.09M to 1.19M wus to process. Can we sieve this integer in record time?!
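The two quoted counts are consistent with the 6,490+ numbers earlier in the thread, assuming the same per-workunit q-range of 2,000 and a hypothetical sieve start near q = 20M (the start value is not stated in the thread, only inferred so the arithmetic comes out):

```python
# Sanity-check of the quoted wu counts for 2,1285-.
# Both the per-wu q-range (2,000) and the start q (20M) are assumptions.
Q_PER_WU = 2_000
q_start = 20_000_000  # hypothetical start of the sieved range

wu_estimate = {q_stop: (q_stop - q_start) // Q_PER_WU
               for q_stop in (2_200_000_000, 2_400_000_000)}

for q_stop, wus in wu_estimate.items():
    print(f"q up to {q_stop // 1_000_000}M -> {wus / 1e6:.2f}M wus")
# q up to 2200M -> 1.09M wus
# q up to 2400M -> 1.19M wus
```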

For 6,490+ we hope to stop at q=1800M.

Thank you all for the CPU power deployed on this project.

Best Regards,

Carlos
 
Thanks, Carlos Pinho, for the update.
That sounds good for an eight-core machine with 8 GB RAM.
 
joe, right now it is consuming 600 MB per thread, so within a week we will hit the 850 MB mark again. In the meantime it looks like the LAF team is moving cores to NFS@Home because the SkyNet POGS server dried out.
This is the perfect time to hit the NFS server harder.
 
Sorry for the late reply. Some comments from Greg, the admin.

1) Server issues: "I think I tracked down the database slowdown. Turns out that the size of the database, specifically the host table, has grown substantially thanks to Syracuse’s 1.8 million host entries. It became larger than the cache assigned to it, so records were being read from disk rather than memory. That’s bad. I increased the max cache size to much larger than the table, so things should be back to normal now."

2) Records: "As far as setting a new record, the current lasieve16e binaries aren’t up to the task. They came close to running out of steam on our current records."
Also: "I need to take a look at the full data set I have now and set priorities for the next grant proposal."

So we are trying to check the Cunningham Tables and other projects for numbers where we can be of assistance within the limits of the sievers, as close to the record pinch as possible.

Thank you all for your support.
I'll let you know what will be the next integer to sieve.

Keep crunching; we are still far from finishing 2,1285-, with more or less 850k wus to go.

Carlos
 
Name: NFS@home Air Strike on lasieve5f Application
Status: Upcoming
Project: NFS@Home
Issued by: SETI.USA
Start time: 2015-08-28 00:00 UTC
End time: 2015-09-02 00:00 UTC
Late entrants allowed? Yes

http://boincstats.com/en/stats/challenge/team/chat/704

http://escatter11.fullerton.edu/nfs/prefs.php?subset=project

Run only the selected applications:
lasieved - app for RSALS subproject, uses less than 0.5 GB memory: no
lasievee - work nearly always available, uses up to 0.5 GB memory: no
lasievef - used for huge factorizations, uses up to 1 GB memory: no
lasieve5f - used for huge factorizations, uses up to 1 GB memory: yes
 
Hello Carlos,
thanks for the info, but the SETI WOW Event from SETI.Germany runs from 08/15/2015 to 08/29/2015.

http://www.seti-germany.de/Wow/44_en_Welcome.html


That means less power for NFS during this time.

Sorry for the bad school English.

Greetings
joe
 
If the US boys come with an airstrike, we will have to set up our bunker *attacke*
 