This post is geared towards those new to the field and curious about getting started. (This post has been translated into Chinese here.)

I want to answer some questions that I'm commonly asked:

- What kind of computer do I need to do deep learning?
- Why does fast.ai recommend Nvidia GPUs?
- What deep learning library do you recommend for beginners?
- How do you put deep learning into production?

I think these questions all fall under a general theme of: What do you need (in terms of hardware, software, background, and data) to do deep learning?

## The hardware you need

### We are indebted to the gaming industry

The video game industry is larger (in terms of revenue) than the film and music industries combined. In the last 20 years, the video game industry drove forward huge advances in GPUs (graphical processing units), which are used to do the matrix math needed for rendering graphics. Fortunately, these are exactly the type of computations needed for deep learning. These advances in GPU technology are a key part of why neural networks are proving so much more powerful now than they were a few decades ago. Training a deep learning model without a GPU would be painfully slow in most cases.

Most deep learning practitioners are not programming GPUs directly; we are using software libraries (such as PyTorch or TensorFlow) that handle this. However, to effectively use these libraries, you need access to the right type of GPU. In almost all cases, this means having access to a GPU from the company Nvidia.

CUDA and OpenCL are the two main ways of programming GPUs. CUDA is by far the most developed: it has the most extensive ecosystem and is the most robustly supported by deep learning libraries. CUDA is a proprietary language created by Nvidia, so it can't be used by GPUs from other companies. When fast.ai recommends Nvidia GPUs, it is not out of any special affinity or loyalty to Nvidia on our part, but because they are by far the best option for deep learning.

Nvidia dominates the market for GPUs, with the next-closest competitor being the company AMD. This summer, AMD announced the release of a platform called ROCm to provide more support for deep learning. ROCm support for major deep learning libraries such as PyTorch, TensorFlow, MxNet, and CNTK is still under development. While I would love to see an open source alternative succeed, I have to admit that I find the documentation for ROCm hard to understand. I just read the Overview, Getting Started, and Deep Learning pages of the ROCm website and still can't explain what ROCm is in my own words, although I want to include it here for completeness. (I admittedly don't have a background in hardware, but I think that data scientists like me should be part of the intended audience for this project.)

### If you don't have a GPU…

If your computer doesn't have a GPU or has a non-Nvidia GPU, you have several great options:

- Use Crestle, through your browser: Crestle is a service (developed by fast.ai student Anurag Goel) that gives you an already-set-up cloud service, with all the popular scientific and deep learning frameworks pre-installed and configured to run on a GPU in the cloud. It is easily accessed through your browser. New users get 10 hours and 1 GB of storage for free; after this, GPU usage is 59 cents per hour. I recommend this option to those who are new to AWS or new to using the console.
- Set up an AWS cloud instance through your console: You can create an AWS instance (which remotely provides you with Nvidia GPUs) by following the steps in this fast.ai setup lesson.
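Whichever setup you end up with, it is worth knowing how to confirm that your deep learning library can actually see a GPU. Below is a minimal sketch using PyTorch; the specifics (and TensorFlow's analogous `tf.config` checks) are standard library calls, not something from this post:

```python
import torch

# True only if an Nvidia GPU and a working CUDA install are visible
print(torch.cuda.is_available())

# The library handles the GPU programming for us: the same code runs
# on CPU or GPU depending on where the tensors live.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(3, 3, device=device)
y = x @ x  # the kind of matrix math GPUs accelerate
print(y.shape)
```

If `torch.cuda.is_available()` returns `False` on a machine that should have a GPU (e.g. a cloud instance), the usual culprit is a missing or mismatched CUDA driver rather than the library itself.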