
NVIDIA Research Advances Deep Learning at ICLR


This article is from nvidia.com. The original URL is: https://blogs.nvidia.com/blog/2018/04/25/nvidia-research-iclr-2/


NVIDIA researchers grabbed headlines last fall for generating photos of believable yet imaginary celebrity faces with deep learning. They’ll discuss how they did it on stage next week at the International Conference on Learning Representations, better known as ICLR.

That research team is one of five from NVIDIA Research sharing their work to advance deep learning at ICLR, April 30-May 3 in Vancouver. Our 200-person strong NVIDIA Research team, which works from 11 worldwide locations, focuses on pushing the boundaries of technology in machine learning, computer vision, self-driving cars, robotics, graphics and other areas.

Also at ICLR: The NVIDIA Deep Learning Institute will offer free online training — and the chance to win an NVIDIA TITAN V. (More information below.) And, if you missed our GPU Technology Conference, you can see our latest innovations at our booth.

ICLR, in its sixth year, is focused on the latest deep learning techniques. NVIDIA is a sponsor of the conference.

More Than a Pretty Face

In the face-generating research, a team at our Finland lab developed a method of training generative adversarial networks (GANs) that produced better results than existing techniques. The researchers demonstrated their success by applying it to the difficult problem of generating realistic-looking human faces.

“Human looks have been somewhat sacred. It’s extremely difficult to create believable-looking digital characters in, say, movies without using real-life actors as reference,” said Tero Karras, lead author on the ICLR paper. “With deep learning and this paper, we’re getting closer.”

The neural network generates new human faces by mixing characteristics like gender, hairstyle, and face shape in different ways. The video below shows the result of varying these characteristics at random, demonstrating an endless number of possible combinations.
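The mixing described above can be sketched with a toy latent-space model. This is a minimal illustration under stated assumptions, not the paper's method: the "generator" here is a hypothetical random linear map, whereas the actual work uses a deep convolutional GAN trained progressively from low to high resolution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "generator": a fixed random linear map from a
# 512-dim latent vector to a flattened 4x4 grayscale image.
# (The real paper trains a deep convolutional generator; this stand-in
# only illustrates how latent vectors map to images.)
LATENT_DIM = 512
W = rng.standard_normal((LATENT_DIM, 4 * 4)) * 0.01

def generate(z):
    """Map a latent vector to a tiny synthetic 'image' in [-1, 1]."""
    return np.tanh(z @ W).reshape(4, 4)

# Each random latent vector yields a different output; interpolating
# between two latents blends the characteristics of both.
z_a = rng.standard_normal(LATENT_DIM)
z_b = rng.standard_normal(LATENT_DIM)
blend = generate(0.5 * z_a + 0.5 * z_b)  # midpoint mixes traits of both
```

Sampling many such latents at random is what produces the endless variety the video demonstrates.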

The research paves the way for game developers to more quickly and easily create digital people who look like the real thing, Karras said. He’s also heard from a team that’s looking to apply the research to help people with prosopagnosia, or face blindness, a neural disorder in which sufferers can’t recognize faces.

The researchers will discuss the paper, “Progressive Growing of GANs for Improved Quality, Stability, and Variation,” on Monday morning at ICLR, explaining how they achieved such good results and the reasoning behind their techniques.

Poster Sessions

Check out our poster sessions at ICLR:

  • “Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training” – Training a neural network across many machines is an efficient way to train larger, deeper models, but it normally requires an expensive high-bandwidth communications network. This research makes it possible to train models faster across more GPUs, even on an inexpensive commodity Ethernet network.
  • “Sparse Persistent RNNs: Squeezing Large Recurrent Networks On-Chip” – Engineers combined a method for more efficiently running recurrent neural networks on GPUs with a technique known as model pruning, which reduces the complexity of a neural network. The result dramatically speeds up models and makes it possible to deploy far larger neural networks on each GPU.
  • “Efficient Sparse-Winograd Convolutional Neural Networks” – Convolutional neural networks are computationally intensive, which makes it difficult to deploy them on mobile devices. Researchers made changes to what’s known as the Winograd-based convolution algorithm — a method used to reduce the number of computations needed to process CNNs — so that it would work with network pruning, a way to reduce parameters and increase speed in a neural network. In addition to shrinking computational workloads, combining the two methods allowed researchers to prune networks more aggressively.
  • “Mixed Precision Training” – Mixed precision training takes advantage of NVIDIA Volta Tensor Cores to speed up training and reduce memory requirements, which enables training larger models. Techniques described in the paper lead to model accuracy that matches single-precision results, without changing any hyperparameters.
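The core idea behind the gradient-compression poster is to send only the largest-magnitude gradient entries and keep the rest as a local residual. The sketch below shows plain top-k sparsification on a synthetic gradient; it is a simplified illustration, omitting the momentum correction, clipping, and warm-up techniques the paper describes.

```python
import numpy as np

def topk_sparsify(grad, ratio=0.01):
    """Select the largest-magnitude `ratio` fraction of gradient entries
    to transmit; everything else stays behind as a residual that would
    be accumulated into the next step's gradient."""
    k = max(1, int(grad.size * ratio))
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of top-k entries
    vals = grad[idx]                              # the values actually sent
    residual = grad.copy()
    residual[idx] = 0.0                           # unsent part kept locally
    return idx, vals, residual

rng = np.random.default_rng(1)
g = rng.standard_normal(10_000)                   # synthetic 1-D gradient
idx, vals, res = topk_sparsify(g, ratio=0.01)     # sends 100 of 10,000 values
```

Transmitting only indices and values at a 1 percent ratio cuts the per-step communication volume by roughly two orders of magnitude, which is what makes commodity Ethernet viable.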
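The mechanics behind the mixed-precision poster can be sketched in a few lines: compute in float16, keep a float32 "master" copy of the weights, and scale the loss so small gradients survive float16's limited range. The toy linear model, learning rate, and loss scale below are illustrative assumptions, not values from the paper.

```python
import numpy as np

master_w = np.zeros(4, dtype=np.float32)          # float32 master weights
x = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float16)
target = np.float16(5.0)
loss_scale = np.float16(1024.0)                   # keeps tiny grads representable
lr = 0.01

for _ in range(100):
    w16 = master_w.astype(np.float16)             # forward/backward in float16
    pred = w16 @ x
    err = pred - target
    grad16 = (err * loss_scale) * x               # scaled gradient, float16
    grad32 = grad16.astype(np.float32) / float(loss_scale)  # unscale in float32
    master_w -= lr * grad32                       # update the float32 master copy
```

Keeping the update in float32 prevents small weight changes from rounding away, while the float16 forward and backward passes are what the Tensor Cores accelerate.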

Get Free Training, Win a TITAN V

The NVIDIA Deep Learning Institute (DLI) will offer free online training exclusively to ICLR attendees — and the chance to win an NVIDIA TITAN V. Stop by the NVIDIA booth to pick up a token card for free access. The first 200 attendees to take an online course will receive $100 in online training credits. See contest rules. And swing by on Tuesday and Wednesday between 4-5 p.m. to meet our DLI University Ambassadors.

Also at our booth: get hands-on with some of our latest technology, talk with deep learning experts, or meet our hiring team.
