Response Summary
Many thanks to the 300 users who responded to this year's User Survey -- this represents the highest response level in the five years we have conducted the survey. The respondents represent all five DOE Science Offices and a variety of home institutions: see User Information.
You can see the FY 2002 User Survey text, in which users rated us on a 7-point satisfaction scale. Some areas were also rated on a 3-point importance scale.
The survey responses provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, and point us to areas we can improve. The survey results are listed below.
Every year we institute changes based on the survey; this past year's efforts include:
- With the NERSC User Group we established a queue committee whose task was to investigate queue issues and recommend improvements. This year's rating for SP: queue structure went up by 0.7 points. Based on the committee's recommendations NERSC did the following:
- Improved debug and interactive turnaround during prime time by setting aside 5% of the SP compute pool for interactive and debug jobs from 5:00 AM to 6:00 PM Pacific Time Monday to Friday. This year's rating for SP: ability to run interactively went up by 0.8 points.
- Implemented priority aging for regular class jobs: jobs in the regular class for more than 36 hours will not be preempted by new premium jobs.
- Provided a new regular_long class with a connect time limit of 24 hours for jobs using 32 nodes or fewer. Such jobs are not drained for system outages, so self-checkpointing is very important for regular_long jobs (a sample job script appears after this list).
- NERSC provided more performance analysis tools on the SP, along with documentation and training on how to use them. See Programming Tools. This year's rating for SP: performance and debugging tools went up by 0.8 points.
- NERSC installed new visualization tools on the Vis Server, Escher, as well as on Seaborg, and streamlined the visualization documentation. See Visualization Packages. This year's rating for Visualization Services went up by 0.3 points.
- NERSC wrote a number of scripts to improve SP management procedures. This year's rating for SP: uptime went up by 1.0 point, the largest increase in satisfaction in the whole survey.
- NERSC started conducting monthly training sessions over the Internet using Access Grid Node technology. This technology is not yet completely mature, and there have been a few rough spots along the way. Satisfaction with training remains at the same level as last year, and we will work to improve our training program in the upcoming year.
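For users adjusting their batch jobs to these scheduling changes, the sketch below shows what a LoadLeveler job command file targeting the new regular_long class might look like. It is a minimal illustration only: the class name and 24-hour limit come from the changes described above, while the node counts, file names, and application arguments are assumptions; NERSC's SP batch documentation remains the authoritative reference.

```
#!/bin/sh
# Illustrative LoadLeveler command file for the regular_long class on Seaborg.
# The keywords are standard LoadLeveler; the specific values are examples only.
#
# @ job_name         = rlong_example
# @ job_type         = parallel
# @ class            = regular_long
# @ node             = 16
# @ tasks_per_node   = 16
# @ wall_clock_limit = 24:00:00
# @ output           = $(job_name).$(jobid).out
# @ error            = $(job_name).$(jobid).err
# @ network.MPI      = csss,not_shared,us
# @ queue
#
# regular_long jobs are not drained before scheduled outages, so the
# application itself should write checkpoints periodically and restart
# from the most recent one if the job is interrupted.
# (my_app and its -restart flag are hypothetical.)
poe ./my_app -restart latest.chk
```

Such a file would be submitted with llsubmit. The class must be regular_long and the node count 32 or fewer for the 24-hour limit to apply; jobs that fit the regular class limits also benefit from the priority aging described above.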
The average satisfaction scores from this year's survey ranged from a high of 6.6 to a low of 4.8. Areas with the highest user satisfaction were:
- SP: uptime
- Consulting: timely response
- HPSS: reliability
- PDSF: uptime
Areas with the lowest user satisfaction were:
- PVP: batch wait time
- Visualization services
- Training
The largest increases in satisfaction came from the SP: 9 of the 18 ratings that were significantly higher this year than last year were SP ratings. Other areas showing significant improvements were the T3E (queue structure, tools and utilities, uptime), visualization services, hardware and software configuration, and the New Users Guide.
Only two areas were rated significantly lower this year: PVP performance and debugging tools, and the allocations process.
92 users answered the question "What does NERSC do well?" 71 respondents pointed out that NERSC is a well-run center with good hardware; 42 singled out User Support and NERSC's staff, 16 NERSC's documentation, and 13 job scheduling and batch throughput. Some representative comments are:
Among the supercomputing facilities I tried until now, NERSC excels in most aspects. I am most satisfied with the overall stability of the system. This must come from the outstanding competence of the technicians.

I really appreciate the job from consult. They always did their best to help me to resolve my technique problems, especially at starting to use seaborg.
The available hardware and software is very good. It meets my needs well. There is an abundance of documentation I have benefited from. Account support has also been very good. I also appreciate the seeming concern about security.
66 users responded to "What should NERSC do differently?" The following issues were raised and will be addressed in the upcoming year:
- SP scheduling:
- Could more resources be devoted to the regular_long class (more nodes, a longer run time, better throughput)?
- Could longer run time limits be implemented across the board?
- Could more services be devoted to interactive jobs?
- Could there be a serial queue?
- SP software:
- Could the Unix environment be more user-friendly (e.g., more editors and shells in the default path)?
- Could there be more data analysis software, including MATLAB?
- Computing resources:
- NERSC needs more computational power overall
- Could a PVP resource be provided?
- Could mid-range computing or cluster resources be provided?
- Documentation:
- Provide better searching, navigation, and organization of the information.
- Enhance SP documentation.
- Training:
- Provide more training on performance analysis, optimization and debugging.
- Provide more information in the New Users Guide.
- User Information
- Overall Satisfaction and Importance
- All Satisfaction Questions and Changes from Previous Years
- Visualization and Grid Computing
- Web, NIM, and Communications
- Hardware Resources
- Software Resources
- Training
- User Services
- Comments about NERSC