SOS: Save Our Science!
NASA's acclaimed Science Program -- the heart and soul of the U.S. space agency -- is in danger!
In spite of its great promises of a "Vision for Space Exploration", the U.S. administration has submitted a shameful 5-year budget that will devastate NASA's science efforts. It slashes over $3 billion from crucial science programs in favor of paying for unplanned and unbudgeted new costs for an expensive Space Shuttle program that is scheduled to be phased out by 2010.
If this disastrous budget stands, NASA will have no "Vision" at all. As one former NASA official put it, "Exploration without Science is just tourism"...and that is exactly what this budget promises.
Space exploration is a worldwide enterprise, and NASA's program serves all Earth's people. These catastrophic cuts will rob the entire world of unique chances to explore our solar system.
Right now, we have a simple choice: we can wring our hands while NASA's most daring and productive efforts are strangled…or we can fight! We can stop the U.S. administration from decimating Science and virtually destroying one of the world's premier exploration programs for at least a decade -- and perhaps for our lifetimes.
Take Action Today! Help launch the Society's SOS: Save Our Science Campaign. Simply take the following three steps right away:
Sign the Petition >>
U.S. citizens can use our Legislative Action Center.
What will happen if this budget is passed?
The Europa Mission - DEAD.
This long-sought mission -- actually mandated by Congress last year -- would have explored one of our best shots at finding life beyond Earth.
The Terrestrial Planet Finder - DEAD.
TPF would have enabled us to find Earth-like worlds in distant solar systems -- to actually see continents and seasonal changes on other planets.
The Stratospheric Observatory for Infrared Astronomy (SOFIA) - DEAD.
Years of international preparation lost.
The Mars Sample Return mission - DEAD. Two Mars Scout missions scheduled for after 2011 - DEAD. The Mars Telecommunications Orbiter - DEAD.
University research funding - CUT 15%
Astrobiology - GUTTED by 50%
I was just wondering what stats people had got to?
Personally, mine are as follows:
Total CPU time: 1.838 years
Average CPU time per work unit: 7 hr 35 min 23.7 sec
Average results received per day: 1.65
SETI@home user for: 3.517 years
Your rank out of 5,436,301 total users: 169,981st place
Number of users who share this rank: 71
You have completed more work units than: 96.872% of our users
I'd have liked to have hit 2,500 units before close, but I'll take that.
Of course, according to the top users list, the top user Serhat SUT - Turk Seti Team reached 8,548,600 units.
Anyway, anyone else want to share? How did we all fare at close of play?
December 13, 2005 - 06:00 UTC
Okay - we're out of the woods as far as the current server issues go. As with most things around here, the actual problem was well disguised and the eventual solution simple in essence.
Early Monday morning, December 5, we started dropping connections to our upload/download server (kryten). See the posts below for more exposition. We shuffled services around, added a web server to the fray, tuned file systems, and tweaked Apache settings, all to no avail. We checked whether we were being DoS'ed - we were not. Was our database a bottleneck? No.
Progress was slow because we were also fighting with the master database merge. And every fix required a reboot or a lengthy waiting period to see whether it had helped.
By Friday we were out of smoking guns. At this point kryten was only doing uploads and nothing else - reading from sockets and writing files to local disk. What was its problem? We decided to convert the file_upload_handler into a FastCGI process. I (Matt) applied the conversions and Jeff figured out how to compile it, but it wasn't working. We left it for the weekend, and shut off workunit downloads to prevent aggravating the upload problem with more results in the mix.
When we all returned on Monday, David made some minor optimizations of the backend server code (removing a couple excess fstats) and I finally remembered that printf(x) and fprintf(stdout,x) are two very different things according to FastCGI. We got the file_upload_handler working as a FastCGI this afternoon.
We weren't expecting very much, since the file_upload_handler doesn't access the database. It basically just reads a file from a socket and writes it to disk. So the FastCGI version would only save us process spawning overhead and that's it.
But that was more than enough. We were handling only a few uploads a second before; the FastCGI version handled over 90 per second right out of the box. Within a couple of hours we caught up on a week of backlog. Of course, this put new pressure on the scheduler, as clients with uploaded results want more work. We imagine everything will be back to normal come morning. We're leaving several back-end processes off overnight just to make sure.
Meanwhile the master database merge is successfully chugging along, albeit slowly, in the background.
I have 10 machines running, and all of them are stuck in limbo because I can't upload or download any work units, new or completed.
Ok. So I FINALLY moved from the old platform to BOINC.
I have 3 machines here, all attached to the same Linksys switch. 1 of the machines is able to connect via BOINC and up/dl the work units.
My other 2 machines keep giving me "FAILED TO ATTACH TO PROJECT" in the BOINC manager.
Any ideas what's keeping me from getting work from the server?
No firewall (hw or sw). All 3 machines are similar: Win XP as the OS, 256MB RAM. On the 2 non-connecting machines, EVERYTHING else works fine. BOINC is the *ONLY* problem. I have the latest version.