Table of Contents
- Response Summary
- Respondent Demographics
- Overall Satisfaction and Importance
- All Satisfaction, Importance and Usefulness Ratings
- Hardware Resources
- Software
- Security and One Time Passwords
- Visualization and Data Analysis
- HPC Consulting
- Services and Communications
- Web Interfaces
- Training
- Comments about NERSC
Hardware Resources
- Legend
- Hardware Satisfaction - by Score
- Hardware Satisfaction - by Platform
- Max Processors Effectively Used on Seaborg
- Hardware Comments
Legend:
Satisfaction scale: 7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied
Change from 2003: difference between the 2004 and 2003 average scores for an item; a positive value indicates increased satisfaction.
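The Average Score and Std. Dev. columns in the tables that follow are derived from the per-rating response counts. As a quick check, the minimal sketch below (not NERSC's own tabulation code) reproduces the "SP: Uptime (Availability)" row; whether the survey used the sample or population standard deviation is an assumption here, since both round to the published value for this row.

```python
# A minimal sketch (not NERSC's tabulation code) reproducing the derived
# columns for one row of the tables below: "SP: Uptime (Availability)".
# Whether the survey used the sample (n-1) or population (n) standard
# deviation is an assumption; both round to the published 1.20 here.
import math

counts = {1: 3, 2: 1, 3: 4, 4: 6, 5: 6, 6: 54, 7: 92}  # rating -> number of responses

n = sum(counts.values())                                  # Total Responses = 166
mean = sum(score * k for score, k in counts.items()) / n  # Average Score
var = sum(k * (score - mean) ** 2 for score, k in counts.items()) / (n - 1)

print(f"responses={n}  average={mean:.2f}  std_dev={math.sqrt(var):.2f}")
# -> responses=166  average=6.26  std_dev=1.20
```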
Hardware Satisfaction - by Score
7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied
Item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Total Responses | Average Score | Std. Dev. | Change from 2003
---|---|---|---|---|---|---|---|---|---|---|---
HPSS: Reliability (data integrity) | | | | 5 | | 16 | 97 | 118 | 6.74 | 0.67 | 0.13
HPSS: Uptime (Availability) | | | 1 | 3 | 1 | 25 | 89 | 119 | 6.66 | 0.70 | 0.12
HPSS: Overall satisfaction | | 1 | | 4 | 2 | 33 | 84 | 124 | 6.56 | 0.80 | 0.10
PDSF: Overall satisfaction | | | | 3 | 1 | 11 | 31 | 46 | 6.52 | 0.84 | 0.11
Network performance within NERSC (e.g. Seaborg to HPSS) | 1 | | | 2 | 8 | 40 | 75 | 126 | 6.46 | 0.85 | -0.08
PDSF: Uptime (availability) | | | 1 | 3 | 2 | 11 | 30 | 47 | 6.40 | 0.99 | 0.05
HPSS: Data transfer rates | | 1 | | 3 | 8 | 41 | 66 | 119 | 6.40 | 0.84 | 
PDSF: Batch queue structure | | | | 4 | 3 | 13 | 25 | 45 | 6.31 | 0.95 | 0.31
SP: Uptime (Availability) | 3 | 1 | 4 | 6 | 6 | 54 | 92 | 166 | 6.26 | 1.20 | -0.16
HPSS: Data access time | 1 | 1 | 1 | 6 | 8 | 40 | 61 | 118 | 6.25 | 1.08 | -0.21
HPSS: User interface (hsi, pftp, ftp) | | | 3 | 8 | 12 | 41 | 53 | 117 | 6.14 | 1.02 | 0.16
Remote network performance to/from NERSC (e.g. Seaborg to your home institution) | 1 | 2 | 6 | 2 | 17 | 57 | 70 | 155 | 6.12 | 1.15 | -0.00
SP: Disk configuration and I/O performance | 2 | | 8 | 11 | 13 | 53 | 60 | 147 | 5.94 | 1.28 | -0.21
PDSF: Batch wait time | | | 1 | 6 | 5 | 19 | 14 | 45 | 5.87 | 1.08 | -0.06
SP: Seaborg overall | 4 | 7 | 7 | 2 | 26 | 62 | 60 | 168 | 5.77 | 1.47 | -0.66
PDSF: Ability to run interactively | | 1 | 5 | 5 | 3 | 16 | 17 | 47 | 5.68 | 1.45 | -0.09
PDSF: Disk configuration and I/O performance | | 1 | 5 | 4 | 6 | 13 | 15 | 44 | 5.59 | 1.45 | -0.10
Vis server (Escher) | | | | 8 | 1 | 3 | 7 | 19 | 5.47 | 1.39 | 0.24
SP: Ability to run interactively | 3 | 5 | 12 | 24 | 22 | 52 | 38 | 156 | 5.34 | 1.50 | -0.23
Math server (Newton) | | | 1 | 8 | 1 | 4 | 3 | 17 | 5.00 | 1.32 | -0.20
SP: Batch queue structure | 17 | 9 | 18 | 17 | 30 | 53 | 20 | 164 | 4.66 | 1.85 | -1.03
SP: Batch wait time | 26 | 16 | 36 | 14 | 27 | 32 | 10 | 161 | 3.84 | 1.90 | -1.40
Hardware Satisfaction - by Platform
7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied
Item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Total Responses | Average Score | Std. Dev. | Change from 2003
---|---|---|---|---|---|---|---|---|---|---|---
SP: Uptime (Availability) | 3 | 1 | 4 | 6 | 6 | 54 | 92 | 166 | 6.26 | 1.20 | -0.16
SP: Disk configuration and I/O performance | 2 | | 8 | 11 | 13 | 53 | 60 | 147 | 5.94 | 1.28 | -0.21
SP: Seaborg overall | 4 | 7 | 7 | 2 | 26 | 62 | 60 | 168 | 5.77 | 1.47 | -0.66
SP: Ability to run interactively | 3 | 5 | 12 | 24 | 22 | 52 | 38 | 156 | 5.34 | 1.50 | -0.23
SP: Batch queue structure | 17 | 9 | 18 | 17 | 30 | 53 | 20 | 164 | 4.66 | 1.85 | -1.03
SP: Batch wait time | 26 | 16 | 36 | 14 | 27 | 32 | 10 | 161 | 3.84 | 1.90 | -1.40
HPSS: Reliability (data integrity) | | | | 5 | | 16 | 97 | 118 | 6.74 | 0.67 | 0.13
HPSS: Uptime (Availability) | | | 1 | 3 | 1 | 25 | 89 | 119 | 6.66 | 0.70 | 0.12
HPSS: Overall satisfaction | | 1 | | 4 | 2 | 33 | 84 | 124 | 6.56 | 0.80 | 0.10
HPSS: Data transfer rates | | 1 | | 3 | 8 | 41 | 66 | 119 | 6.40 | 0.84 | 
HPSS: Data access time | 1 | 1 | 1 | 6 | 8 | 40 | 61 | 118 | 6.25 | 1.08 | -0.21
HPSS: User interface (hsi, pftp, ftp) | | | 3 | 8 | 12 | 41 | 53 | 117 | 6.14 | 1.02 | 0.16
PDSF: Overall satisfaction | | | | 3 | 1 | 11 | 31 | 46 | 6.52 | 0.84 | 0.11
PDSF: Uptime (availability) | | | 1 | 3 | 2 | 11 | 30 | 47 | 6.40 | 0.99 | 0.05
PDSF: Batch queue structure | | | | 4 | 3 | 13 | 25 | 45 | 6.31 | 0.95 | 0.31
PDSF: Batch wait time | | | 1 | 6 | 5 | 19 | 14 | 45 | 5.87 | 1.08 | -0.06
PDSF: Ability to run interactively | | 1 | 5 | 5 | 3 | 16 | 17 | 47 | 5.68 | 1.45 | -0.09
PDSF: Disk configuration and I/O performance | | 1 | 5 | 4 | 6 | 13 | 15 | 44 | 5.59 | 1.45 | -0.10
Network performance within NERSC (e.g. Seaborg to HPSS) | 1 | | | 2 | 8 | 40 | 75 | 126 | 6.46 | 0.85 | -0.08
Remote network performance to/from NERSC (e.g. Seaborg to your home institution) | 1 | 2 | 6 | 2 | 17 | 57 | 70 | 155 | 6.12 | 1.15 | -0.00
Vis server (Escher) | | | | 8 | 1 | 3 | 7 | 19 | 5.47 | 1.39 | 0.24
Math server (Newton) | | | 1 | 8 | 1 | 4 | 3 | 17 | 5.00 | 1.32 | -0.20
What is the maximum number of processors your code can effectively use for parallel computations on Seaborg? 140 responses
Num Procs | Num Responses |
---|---|
6,000+ | 6 |
4,096+ | 4 |
4,096 | 6 |
2,048-4,096 | 2 |
2,048 | 9 |
1,024-2,048 | 11 |
1,024 | 16 |
512-1,024 | 5 |
512 | 16 |
256-512 | 3 |
256 | 11 |
128-256 | 5 |
128 | 15 |
64-128 | 3 |
64 | 9 |
32-64 | 3 |
32 | 5 |
16 | 7 |
<16 | 4 |
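Because several comments later in this section turn on how many users can actually fill very large jobs, it may help to tally this table. The sketch below is illustrative only; counting each range by its lower bound and treating "<16" as 0 are assumptions made for the sketch.

```python
# Illustrative tally of the table above: what fraction of the 140 respondents
# report that their codes can use at least a given number of processors?
# Assumptions for this sketch: ranges such as "512-1,024" are counted by
# their lower bound, and "<16" is treated as 0.
responses = [
    (6000, 6), (4096, 4), (4096, 6), (2048, 2), (2048, 9), (1024, 11),
    (1024, 16), (512, 5), (512, 16), (256, 3), (256, 11), (128, 5),
    (128, 15), (64, 3), (64, 9), (32, 3), (32, 5), (16, 7), (0, 4),
]

total = sum(count for _, count in responses)  # 140
for threshold in (128, 512, 1024, 2048):
    at_least = sum(count for procs, count in responses if procs >= threshold)
    print(f">= {threshold:4d} processors: {at_least:3d} of {total} ({100 * at_least / total:.0f}%)")
```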
Hardware Comments: 51 responses
One user made the general comment that hardware is "very stable and satisfactory"; 50 other users commented on specific systems.
Comments on NERSC's IBM SP and computational requirements
21 | Turnaround too slow |
16 | Queue / job mix policies should be adjusted |
11 | Seaborg needs to be upgraded / computational requirements |
5 | GPFS and upgrade problems |
4 | Provide more interactive and debugging resources |
3 | Needs more disk |
Comments on NERSC's PDSF Cluster
6 | Problems with login nodes / slow access |
3 | Disk vault comments |
2 | Long job waits / need more cycles |
Comments on NERSC's HPSS Storage System
2 | Needs better user interfaces |
1 | Good capacity |
1 | Will use soon |
Comments on Network Performance
Comments on NERSC's IBM SP and computational requirements: 37 responses
- Turnaround too slow: 21 responses
For some reason, the recent turnaround times on seaborg have been atrocious. This doesn't seem to be a hardware problem, the machine is always up. The turnaround is so bad that my group's computers are now 'faster' than seaborg, which is totally unexpected and I don't understand why this is the case. My seaborg usage dropped for that reason, if I have to wait a week for a job it's faster to run them on PCs. One of my students constantly has this turnaround problem and just gave up on seaborg. I've never seen it like that before.
Seaborg had been a joy to use for several years, much better than other high performance systems. But in the last few months the scheduling system has been much harder to use, and my group has had a hard time getting our main jobs on the machine.
The batch wait times are too long. NERSC needs to provide good turnaround to the majority of its users, not the minority. NERSC needs a mix of platforms to support small and large MPP jobs.
Seaborg has now become too "small" to handle so many users running jobs of 1,024 processors or more. The batch wait time is now much too long.
Seaborg hardware seems to be OK. The problem is the wait time and queue management. ...
The queue wait time is abysmal.
Strong dissatisfaction with Seaborg because it has been difficult to work on computing projects since the end of June, due to the batch queue priorities. My main projects have been on hold since then. ...
Batch wait times are getting quite long. This really diminishes the worth of the resource.
Extraordinarily long delays in the queues have made seaborg almost unusable at times. NERSC seems to have lost sight of its mission of providing a resource that meets _user's needs_ to advance energy research. ...
Only issue is with long wait times for moderate (less than 32-node) runs during 2004. These have been very long (2 weeks) in some cases. Decreasing my run time helped.
Seaborg is very oversubscribed. Queue wait times are long.
The batch wait time is at times very long. I think 2-3 days is reasonable, but 5 is unacceptable. Our group is typically awarded hundreds of thousands of hours and getting all of time used is reliant on the number of jobs we are able to get through the queue. Since we run on smaller numbers of processors, we need to get a larger number of jobs through to meet the quota.
- Queue / job mix policies should be adjusted: 16 responses
More capacity calculations for real science. Put more emphasis (higher priority) on medium-size jobs, where most science is done.
The large number of processors on Seaborg are useless to me. Given the amount of computer time that I am allocated, I cannot run large codes that use more than one node. I think that NERSC should consider the needs of users who are not allocated enough time to make use of the large number of nodes on Seaborg.
NERSC is now completely oversubscribed. The INCITE program has been a disaster for the average user. INCITE had pretty much taken over the machine before the upgrade. ...
The queue structure is so ludicrously biased toward large jobs that it is sometimes impossible to use one's time with a code that is optimum at 128-256 processors. That limit is set by the physics of the problem I'm solving, and no amount of algorithmic tinkering or optimization is going to change it much. NERSC gave my research group time in response to our ERCAP request, but to actually use the time we won, we wind up having to pay extra to use the express queue. Otherwise we would spend a week or more waiting in the regular queue each time we need to restart a job, and we'd never actually be able to use the time we were granted. I understand that NERSC wants to encourage large jobs, but the current queue structure guarantees that anyone who can't scale to 1000 processors is going to have to use the premium queue to get anything done.
I do not understand why the batch structure in seaborg discourages batch jobs using a moderate number of processors. Isn't the queue already long enough?
... Allowing various groups (often doing non-energy related research) to "jump the queue" is frustrating and bewildering. Favoritism towards "embarrassingly parallel" codes employing huge numbers of processors on Seaborg makes sense only if NERSC can provide adequate resources elsewhere (including fast inter-proc communication, not just a beowulf cluster) for smaller jobs. Again, local clusters are not really a solution to this problem, because the codes in question use limited numbers of procs in the first place because they are communication intensive - moving these codes from machines with fast interconnects like seaborg to local clusters using myrinet etc is strongly damaging to performance.
... The summer clog-up problem is due to mismanagement (from MICS/OMB constraints): The 50% discount on >512ps jobs plus the "head of the line" priority given to 3 INCITE PI's blocked use by nearly everyone else. Wait time on 48hr 512ps jobs was more than 3 weeks in Sept. NERSC has persistently over-allocated the machine. A more moderate priority (like 2 day age priority) would have been adequate. Seaborg needs a complete rethinking of the batch queue system.
Time spent waiting in Seaborg queues has increased in the past year. Is this from greater use of the system or less efficient queuing? It would be nice to have this issue addressed, with either greater hardware resources or better system configuration. ...
Seaborg is seriously oversubscribed this year and is much less useful to the vast majority of users than in previous years. Policies have been severely distorted in order to meet milestones and this has been a great dis-service to almost all users of seaborg. It is very important to communicate to those who set the milestones that users are not well served by requiring that seaborg mainly run very large jobs. Very few users or projects are served well (even those that can run large jobs) by such policies. Raw allocation (CPU-hours) is the most important thing for most users, and users are in the best position to determine the optimum number of CPUs to use.
I'm getting less research accomplished now that the seaborg queues give much higher priority to larger jobs (512 and higher) because to get better turn-around everyone tries to run jobs with large numbers of processors and so fewer jobs can run simultaneously and so the queue times have become very long, as you surely know. Many projects that involve cutting edge research are attempting to run in parameter regimes never before attempted and so require close monitoring and typically many trial runs. It is not efficient to do this with large jobs that sit in the queue for more than a week. For my current projects I would prefer to run a 12 hour job with, say, 128 processors every day, as was the situation a year ago, than to have to wait more than a week and hope my 512-processor job then runs at least 12 hours. Runs fail either because I've increased the parameters too far or because a processor or node goes down during my run, which happens much more frequently now because of the larger number of processors per job.
The INCITE program should be stopped immediately. The program destroys scientific computing progress of general public for the sake of a few. Until the INCITE is stopped, the program should be managed more strictly. For example, the INCITE awardees should NOT be allowed to occupy SEABORG above a certain percentage (much more restriction of allowed time for an INCITE individual). Users who require a large number of parallel processors should be given priorities to SEABORG without the INCITE program. NERSC should have more computer resources available for users who do not require massively parallel computing.
We perform a large variety of computations with our codes. However, the simulations we need to perform most often 'only' scale well to 100 or so processors on seaborg. I appreciate that NERSC has been under pressure to show usage by 1000-processor jobs, and that has led to the queue structure preferences that we have seen over the past year. However, some large-scale computations run algorithms that need better communication to computation speeds in order to scale well. Devising scheduling policies that favor only the computations that run well on large parts of seaborg discriminates against other applications. Optimally, a supercomputing center should offer a mix of hardware so that a variety of computation needs can be met. Otherwise, seaborg should be partitioned to allow different types of computations without scheduling conflicts.
- Seaborg needs to be upgraded / computational requirements: 11 responses
Seaborg, though very well maintained, is getting old and slow. It would be great if you had a machine in the pipeline now, as Seaborg becomes less competitive with other machines. (I'm thinking in specific of SP4 machines like Datastar at SDSC/NPACI.)
The fact that seaborg's single-cpu performance is lagging well behind my desktop machines is making it seem much less attractive. I look forward to being able to run large, parallel-processing jobs on a machine with respectable single-cpu performance.
Seaborg is becoming obsolete. It would be great to upgrade to an SP4 fairly soon.
... SP3 is stable and reliable platform but it is becoming a thing of the past these days. Time to look for another large system?
Need the new computer system sooner. Need access to cluster and vector technology in order to keep stagnant models up to date.
... I typically run a number of 'small' (eg, 32 proc) jobs for multiple days, rather than large number of processors for a short time. While the code will scale efficiently to very large jobs, the computer time and human time is much better spent on smaller jobs, when developing new physics models and algorithms. The NERSC emphasis on massively parallel computing also means that the main code runs only on a limited subset of other machines that have PETSc, although in principle it would run well on many different systems that I have access to. The regular Wed downtimes on Seaborg are very inefficient from this point of view, when the batch queues are drained. Usually only half or less of the usual batch job productivity is possible during the entire maintenance week. The backfill queue is only marginally useful for most of my jobs. ...
The only thing I can say is, that the wallclock-times of my jobs are quite large as soon as I do my most complex computations. Sometimes I have to split one computation in two parts. So more powerful hardware would be the icing on the cake, but even so I'm greatly satisfied with the computational performance of SEABORG.
It's too bad that Alvarez [a linux cluster] is going down; that was a handy system to use for test runs.
My statement of 192 as the number of processors my code can use effectively refers to the 1 code I have been running on it in FY2004. Some codes, which only run effectively on smaller numbers of processors have been moved elsewhere. These include codes which run best on < 16 processors! I object to the penalties for 'small' jobs. Having run out of allocations half way through the current allocation period, clearly I feel that the current compute resources are inadequate. Of course, they would be more adequate if Seaborg were capable of delivering 50% of peak, which is what we used to get out of the old Cray C90. As it is 20% is more typical from a modest number of processors, and by the time we push this to 192 processors the number is more like 10-15%.
- GPFS and upgrade problems: 5 responses
I can't use more than 32 seaborg nodes for most models, if I try running with 64 or more I am getting I/O errors from gpfs. No idea why, the code runs fine on a Pwr4 with Colony or Federation switch. The code itself should easily scale up to 128 CPUs for many runs.
Is it now safe to run programs on NERSC's seaborg? Has the IBM bug issue been resolved? Please send us an email about the update.
Remote network seems to drop out without warning. I can press return and get a new line, but can't get anything to execute. Tried ^x^c, ^q escape, but still getting only the new line, nothing executes. Looked at MOTD at nersc home page, but nothing to indicate its gone down. Nothing works. Is anyone watching this?
The new OS has been a bit of a disaster for us. My production code now runs at half of the speed it had before. Also, since poe+ doesn't seem to be working I cannot document the exact speed decrease but I know it's slower since runs that used to take about 8-12 hours now take 2 days and must be split over two runs.
... Now with the degraded status post upgrade it is hard to know what the status will be like.
- Provide more interactive and debugging resources: 4 responses
... I do appreciate the relatively good interactive access to Seaborg, since it is crucial to code development. It would be nice to maintain some priority for interactive and debug jobs later in the day.
... Interactive performance continues to be suboptimal. See my comments at the end for greater detail on this problem. [A problem that I bring up every year is the quality of interactive service. Although this has improved since the last survey, the lack of ability to do small debugging runs interactively (at least with any reliability) is a problem. Would it not be possible to set aside a few nodes that could run with IP protocol (rather than US), in order to create a pool of processors where users could simultaneously run in parallel?]
Only using Seaborg at this point. My only real complaint is that the wait times for debug jobs with a large number of processors are rather inconsistent and hard to predict.
Turnaround time for small jobs (~4 nodes) is too long. Sometimes the SP is dominated by large jobs and even debug jobs have to wait for some time.
- Needs more disk: 3 responses
At the upper end of data storage demands, our jobs quickly fill up allocated scratch space. Even 512 Gbytes is too little and I heard scratch space is always a concern. Should more be added? ...
Default scratch space of 250GB is somewhat small for very large jobs that tend to write a lot of data to the disk. To deal with this I asked for temporary allocation of 1TB of scratch space.
Need Terabytes of scratch storage to work with results of large simulations. ...
Comments on NERSC's PDSF Cluster: 10 responses
- Problems with login nodes / slow access: 5 responses
Sometimes pdsf login nodes are very slow.
1. I experience frequent (and rather frustrating) connectivity problems from LBL to PDSF (long latency).
2. PDSF's interactive nodes are almost always overburdened.
My only complaint is that sometimes the system (PDSF) slows down. For example, a simple "ls" command will take seconds to execute, probably due to some jobs accessing some directory (such as /home) heavily. This can be really frustrating.
My office is in LBL, Building 50, and I log in to PDSF using FSecure SSH from my PC. I am not sure where the problem lies (NERSC, LBL, Building 50 itself...) but on some mornings the network connection is INCREDIBLY slow. It is bad enough that PDSF becomes temporarily useless to me. It can take 5 minutes just to process the "exit" and "logout" commands. Things always improve during the day and by the end of the afternoon the network is so fast it is no longer a factor in how long it takes me to do anything.
There seem to be frequent network problems, on the order of once per week during the day. This can be frustrating.
NERSC response: The PDSF support team has made it possible to run interactively on the batch nodes (there is a FAQ that documents these procedures). They also recently purchased replacement login nodes that are being tested now and should go into production in December 2004. They are top-of-the-line Opterons with twice as much memory as the old nodes.
- Disk vault comments: 3 responses
The datavault disks are invaluable to me.
Since aztera died I am limited by the disk vault I/O resource bottleneck.
The cluster resources of PDSF have been useful and consistent in disk vaults and computer processes. However, the simultaneous load limit on the disk vaults limits the number of jobs we can run. ...
- Long job waits / need more cycles: 2 responses
Re: PDSF concurrency: Analysis of each event is independent, so in principle with millions of events I could use millions of processors. In practice for a typical analysis pass I submit from 20 to 200 jobs with each job taking a few hours at most to finish. Of course, competing with other users, some jobs wait in the queue for a day or two. ...
We submit single jobs (many) to batch queues via Globus Grid 2003 gatekeepers. Need more hardware resources in PDSF.
NERSC response: Sixty-four 3.6 GHz Xeons were added to the PDSF cluster in November 2004. This is 25% more CPUs, and they are almost twice as fast as the older CPUs.
Comments on NERSC's HPSS Storage System: 4 responses
- Needs better user interfaces: 2 responses
... I harp on this annually. Cannot someone in the HPSS consortium spend the couple man-days required to put some standard shell interfaces to hsi? This would improve productivity immeasurably. ...
... Need to be able to transfer data between HPSS and LLNL storage directly - at the moment this is very tedious and error prone...
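One low-effort way to get the shell-style convenience requested above is a thin wrapper around the hsi client. The sketch below is hypothetical, not an existing NERSC or HPSS Consortium tool; it assumes hsi's one-shot command form (for example `hsi "ls -l /path"`) and its `local_file : hpss_file` syntax for put and get, which should be checked against the hsi version installed at your site.

```python
# Hypothetical convenience wrapper around the hsi command-line client
# (not an existing NERSC or HPSS Consortium tool). It assumes hsi accepts a
# single quoted command as its argument and the "local_file : hpss_file"
# form for put/get; check `hsi help` at your site before relying on this.
import subprocess

def _hsi(command: str) -> None:
    """Run one hsi command, raising CalledProcessError on failure."""
    subprocess.run(["hsi", command], check=True)

def hpss_ls(path: str = ".") -> None:
    _hsi(f"ls -l {path}")

def hpss_put(local_file: str, hpss_file: str) -> None:
    _hsi(f"put {local_file} : {hpss_file}")

def hpss_get(local_file: str, hpss_file: str) -> None:
    _hsi(f"get {local_file} : {hpss_file}")

if __name__ == "__main__":
    hpss_ls("my_project")  # "my_project" is a hypothetical HPSS directory
```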
- Good capacity: 1 response
... The very large storage capacity of HPSS has been key to our work.
- Will use soon: 1 response
... Re: HPSS: I have not directly used HPSS myself yet. I will need to do so soon though. ...
Comments on Network Performance: 4 responses
Connection from HPSS to MIT is still rather slow and makes downloading/visualizing large runs a chore, often an overnight chore.
The NERSC connection to the WAN is fine but there is a gap in services for people doing distributed computing in that it is hard to get all the people lined up needed to diagnose application end-to-end performance. We have difficulties sometimes with our BNL-NERSC performance.
... Re: network: Generating graphics on a PDSF computer and displaying in Seattle is noticeably slower than generating the graphics on a local computer. An increase in the network speed would be nice.
Data transfer between NERSC and our institution runs at only a moderate rate, which sometimes makes it difficult to transfer a large data set for visualization. [Seaborg user at MIT]