#101128 Computational & Data Science Research Specialist 4
As an Organized Research Unit of UC San Diego, the San Diego Supercomputer Center (SDSC) is considered a leader in data-intensive computing and cyberinfrastructure, providing resources, services, and expertise to the national research community, including industry and academia. Cyberinfrastructure refers to an accessible, integrated network of computer-based resources and expertise, focused on accelerating scientific inquiry and discovery. SDSC supports hundreds of multidisciplinary programs spanning a wide variety of domains, from earth sciences and biology to astrophysics, bioinformatics, and health IT. SDSC launched Comet, a petascale supercomputer that joins the Center’s data-intensive Gordon cluster. SDSC is a partner in XSEDE (eXtreme Science and Engineering Discovery Environment), the most advanced collection of integrated digital resources and services in the world.
The S3 group within SDSC will engage in three primary activities: i) examining SDSC software development projects to identify business models for their continued sustainability, ii) operating the HUBzero infrastructure, which involves the creation and ongoing operation and development of the HUBzero software platform, and iii) leading the Science Gateways Community Institute, which involves coordinating the efforts of 45 personnel around the United States as members of an NSF-funded distributed virtual organization that provides consulting services to the science gateway community.
The incumbent will apply advanced computational, computer science, data science, and CI software research and development principles, with relevant domain science knowledge where applicable, to perform highly complex research, technology, and software development involving in-depth evaluation of variable factors impacting medium to large projects of broad scope and complexity. S/he will design, develop, and optimize components / tools for major HPC / data science / CI projects. The incumbent will also resolve complex research and technology development and integration issues, give technical presentations to associated research and technology groups and management, and evaluate new hardware and software technologies for advancing complex HPC, data science, and CI projects. The incumbent may represent the organization as part of a team at national and international meetings, conferences, and committees. S/he will assist in the design and implementation of, and recommend, new hardware and software technologies for advancing complex HPC, data science, and CI projects, and may lead a team of research and technical staff.
The incumbent will research novel cyberinfrastructure approaches to scientific simulation workflow, job submission, tracking, and analysis. S/he will continuously examine the cyberinfrastructure efforts of other universities for technologies that may be leveraged and for gap analysis. The incumbent will also develop novel research programs in high throughput computing, with a particular focus on volunteer computing and storage systems, help create research simulation tools in conjunction with leading domain researchers to be installed on major gateways with significant user bases, and interact with Science Gateway software tool developers to install their simulation tools on gateways for submission to one or more local or national compute infrastructures.
Additionally, s/he will monitor system performance and proactively improve interactions with HPC and HTC systems so they continue to perform well as the scale of users and compute loads grows. S/he will work with software tool developers to take advantage of unique hardware and software systems, such as GPUs and container systems like Singularity and Docker, and advise software projects in S3 and the Science Gateways Community Institute on how best to exploit HPC and HTC resources.
For more information, please visit www.sdsc.edu.

QUALIFICATIONS
Bachelor's degree in Computer / Computational / Data Science, or Domain Sciences with computer / computational / data specialization or equivalent experience. Master's degree preferred.
Advanced knowledge of HPC / data science / CI. Extensive experience working with domain scientists to develop their approaches to HPC deployments of their codes. Understanding of various HPC resources and their appropriateness for a variety of scientific computational tasks.
Highly advanced skills and demonstrated experience associated with one or more of the following: HPC hardware and software power and performance analysis and research; design, modification, implementation, and deployment of HPC, data science, or CI applications and tools of large-scale scope.
Extensive experience with national resources such as XSEDE and the Open Science Grid. Strong experience deploying on combined CPU / GPU systems. Significant and broad knowledge of a multitude of job submission and scheduling systems.
Expert ability to assemble heterogeneous computational infrastructure components to achieve necessary results including GPU / CPU strategies that mix rendering with interactive computation. Expert level understanding of containerization strategies and their impact on resource sharing, including multi-session sharing of GPU compute and rendering services.
In-depth skills and experience in independently resolving complex computing / data / CI problems. Skills to perform in-depth troubleshooting of issues relating to scale when user demands or the demands of one or more simulation codes significantly increase.
In-depth experience assessing a broad spectrum of technical and research needs and demands, establishing priorities, and delegating and / or leading the development of solutions to meet such needs. In-depth understanding of how emerging technologies, such as Jupyter notebooks, uncertainty quantification, and containerization, can fulfill such research needs.
- Job offer is contingent upon satisfactory clearance based on Background Check results.