Research Intern - Cloud Competitive Intelligence
Microsoft
Mountain View, California, United States
Overview
Research Internships at Microsoft provide a dynamic environment for research careers with a network of world-class research labs led by globally recognized scientists and engineers, who pursue innovation in a range of scientific and technical disciplines to help solve complex challenges in diverse fields, including computing, healthcare, economics, and the environment.
The Strategic Planning and Architecture (SPARC) group conducts cloud competitive landscape and technology analysis to help us understand potential directions that cloud competition is taking and assess Azure's competitive gaps.
As a Research Intern in the SPARC group, your work will involve reading technical papers, blogs, and market and competitive analysis reports, and building analytical frameworks.
Qualifications
Required Qualifications
- Currently enrolled in a PhD program in engineering, mathematics, physics, applied sciences or similar STEM field.
Other Requirements
- Research Interns are expected to be physically located in their manager’s Microsoft worksite location for the duration of their internship.
- In addition to the qualifications above, you'll need to submit a minimum of two reference letters for this position, as well as a cover letter and any relevant work or research samples. After you submit your application, a request for letters may be sent to your list of references on your behalf. Note that reference letters cannot be requested until after you have submitted your application, and they might not be automatically requested for all candidates. You may wish to alert your letter writers in advance so they will be ready to submit your letter.
Preferred Qualifications
- Experience in building complex analytical models in Microsoft Excel.
- Demonstrated ability to develop original research agendas.
- Knowledge of server hardware performance and TCO modeling and analysis, e.g., CPU, GPU, memory, and other components.
- Knowledge of Cloud and AI/ML hardware and software ecosystems.
- Knowledge of Microsoft Office applications, including advanced knowledge of Excel, along with solid analytical capabilities and experience with statistical tools (regression analysis, advanced Excel/VBA; Power BI is a plus).
- Ability to think unconventionally to derive creative and innovative solutions.
- Solid process/systems background and a proven ability to rapidly understand, use and drive improvements in processes/systems.
- Proficient business judgment & communication/presentation skills.
The base pay range for this internship is USD $6,550 - $12,880 per month. A different range applies to specific work locations: within the San Francisco Bay Area and New York City metropolitan area, the base pay range for this role is USD $8,480 - $13,920 per month.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-intern-pay
Microsoft accepts applications and processes offers for these roles on an ongoing basis.
Responsibilities
Research Interns put inquiry and theory into practice. Alongside fellow doctoral candidates and some of the world’s best researchers, Research Interns learn, collaborate, and network for life. Research Interns not only advance their own careers, but they also contribute to exciting research and development strides. During the 12-week internship, Research Interns are paired with mentors and expected to collaborate with other Research Interns and researchers, present findings, and contribute to the vibrant life of the community. Research internships are available in all areas of research, and are offered year-round, though they typically begin in the summer.
Additional Responsibilities
- Evaluate AI/ML large language models and help build an understanding of the relationship between model sizes and the size of the GPU/accelerator clusters needed to train these models and run inference on them.
- Help build a view of the long-term growth of AI large language models and how they would deploy on future GPU accelerator clusters built by AI vendors and large hyperscalers.
- Develop performance, cost analysis, and modeling perspectives for future AI GPUs.
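The first responsibility above, relating model scale to training-cluster scale, is often approached with back-of-envelope compute models. A minimal sketch of that kind of analysis, assuming the widely used ~6·N·D approximation for dense-transformer training FLOPs (N = parameters, D = training tokens) and hypothetical GPU throughput and utilization figures, might look like:

```python
import math

def gpus_needed(params: float, tokens: float,
                gpu_flops: float = 1.979e15,  # assumed peak dense BF16 FLOP/s per GPU
                mfu: float = 0.4,             # assumed model FLOPs utilization
                days: float = 30.0) -> int:
    """Estimate the GPU count needed to train a dense LLM in a given time.

    Uses the common approximation that training costs ~6 * N * D FLOPs.
    The default gpu_flops, mfu, and time budget are illustrative
    assumptions, not vendor figures.
    """
    total_flops = 6 * params * tokens          # total training compute
    sustained = gpu_flops * mfu                # effective FLOP/s per GPU
    seconds = days * 24 * 3600                 # wall-clock budget
    return math.ceil(total_flops / (sustained * seconds))

# Example: a 70B-parameter model trained on 2T tokens in 30 days
print(gpus_needed(70e9, 2e12))
```

Analyses like this are typically refined with memory-capacity, interconnect, and cost-per-GPU-hour terms before feeding a TCO model.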