AI Infrastructure Experts Offer Best Practices and Insights for Implementers Attending GTC 2018
This article is from nvidia.com. The original URL is: https://blogs.nvidia.com/blog/2018/03/06/dgx-at-gtc-2018/
These endeavors could mean the difference between surviving and thriving in a turbulent market, finding the next wonder-drug, or defending against the next generation of cyber-based threats.
Underpinning these existential challenges and opportunities is a common goal: building an enterprise-grade AI infrastructure that delivers unprecedented performance.
This year at GTC, AI experts have the opportunity to tap insights that will further their teams’ imperatives like never before. Common themes include:
1) New innovations that break the speed of scale barrier
As developers tackle increasingly complex neural network models and embrace model parallelism at larger scale, implementers will be looking for more efficient ways to achieve not only scale, but speed of scale, with greater ease and less architectural complexity.
- Attend the session “Breaking the Barriers to AI-Scale in the Enterprise” to learn more and get the latest from Charlie Boyle, senior director of the NVIDIA DGX product team.
2) From fast prototyping to production AI: the workflow impact of GPU workstations
GPU workstations are having a democratizing effect on deep learning workflows, enabling easy experimentation at the desk with frameworks, models and datasets, rather than wrestling with IT for time on a server, or renting time in the cloud and worrying about cost per training run.
- Attend the session “The Journey from a Small Development Lab Environment to a Production Datacenter for Deep Learning Applications” to learn best practices from Markus Weber, senior product manager, and Ryan Olson, solutions architect, both of the NVIDIA DGX product team.
3) Simplifying AI infrastructure in the data center
DGX simplifies deep learning deployment by systemizing the solution stack inclusive of everything from industry-leading GPUs to performance-optimized frameworks. As teams scale out their environments, important considerations around storage and networking arise.
- Attend the session “High-Performance Input Pipelines for Scalable Deep Learning” to learn from Brian Gold, R&D director at Pure Storage, on how designing your AI infrastructure with the optimal combination of GPU computing and high-performance storage can accelerate time to solution and ensure predictable performance as your environment scales.
4) Implementer perspectives offer a quicker path to AI success
The surest way to shorten deployment timeframes and increase deep learning return on investment is to learn from the pitfalls and best practices of those whose business depends on their AI infrastructure. This panel brings together implementers from a variety of industries to share their experiences.
- Attend the session “Deep Learning Implementers Panel: Field Insights for Accelerating Deep Learning Performance, Productivity and Scale” with guest panelists Brian Gamido of DeepGram, Neil Tenenholtz of Mass General, Arun Subramaniyan of BHGE, and myself moderating to learn how to set up your deep learning project for success.