Course Review: PyTorch for Deep Learning with Python Bootcamp | Udemy

Cheng-Yu Huang
6 min read · Mar 22, 2023
A cool image from googling “PyTorch”- I believe no one will sue me for this?

Over the last month, I completed the “PyTorch for Deep Learning with Python Bootcamp” online course by Jose Portilla on Udemy. In this article, I would like to explain why I chose to enrol in this course, how I approached it, and my overall evaluation of it.

Why PyTorch

As a physics graduate who now works in biological sciences and has experience with microscopy and image processing, people often assume I know a lot about AI. During the PhD interview process last year, I was repeatedly asked if I knew how to use PyTorch or TensorFlow.

However, I didn’t set out to learn these frameworks until the end of last year, when I was preparing a workshop on bioimage analysis. I needed to review some basics of ML for image analysis and explain them in plain language. It was during this process that I realized ML is very interesting, useful, and not difficult to learn given my background in mathematics and physics.

I decided to learn ML through Python, which I use the most in my work. Many of the new ML/DL-based bioimage processing tools are Python-based, and I had two frameworks to choose from: PyTorch or TensorFlow. I chose PyTorch because, during the PhD-seeking process, more people asked me about it than about TensorFlow.

Later on, I discovered that PyTorch and TensorFlow are essentially equivalent and can do similar things, much like Windows and macOS. In the field, TensorFlow is seen as somewhat outdated, and most people are moving towards PyTorch.

Why this course

There are numerous online courses available on PyTorch, and on ML/DL in general. At that time, I was aware of two online course platforms, Coursera and Udemy. I had previously used both, taking a few courses on algorithm design on Coursera and a course on high-performance computing on Udemy. While both courses were well-structured, I preferred Udemy because the platform lets me pay for a course once and keep unlimited access to it. In contrast, on Coursera, access to a course is time-limited (though I later realized it can be free if you don’t care about the certification), and if you don’t finish within a specific time frame, you need to pay more to regain access.

I didn’t spend much time deciding which course to take. I simply searched for PyTorch on Udemy and selected the first course on the list, which was typically the highest-rated and had the most students.

What is covered and How I went through it

The course is divided into 11 sections. Sections 1–5 cover the basics of Python, NumPy, Pandas, and PyTorch tensors. Since I already had a basic understanding of Python, I didn’t spend much time on these sections. I did go through the PyTorch basics in section 5, but in hindsight, I found them too trivial to be worth my time.
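To give a flavour of what those tensor basics look like (the snippet below is my own illustration of standard PyTorch operations, not an excerpt from the course notebooks):

```python
import torch

# Tensors can be created from Python lists or factory functions
x = torch.tensor([[1., 2.], [3., 4.]])
y = torch.ones(2, 2)

print(x + y)       # element-wise addition
print(x @ y)       # matrix multiplication
print(x.mean())    # reduction to a scalar tensor

# Reshaping, and the autograd machinery that distinguishes a
# torch.Tensor from a plain NumPy array
z = x.view(4)                            # reshape without copying
w = torch.tensor(2., requires_grad=True)
loss = (w ** 2).sum()
loss.backward()                          # autograd computes d(loss)/dw
print(w.grad)                            # tensor(4.)
```

If you are already comfortable with NumPy, most of this maps over directly, which is probably why these sections felt trivial to me.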

Sections 6 to 11 cover various DL topics: ML basics (6), ANNs (7), CNNs (8), RNNs (9), GPU computing with PyTorch (10), and NLP (11). Each section includes around five Jupyter notebooks as case studies. The lectures in each section are split into two parts: concept (theory) lectures and “Code Along” lectures, in which the instructor, Jose, goes through the provided notebooks line by line, explaining what the code does.

I went through these sections in two rounds. In round one, I focused on all the concept lectures plus the one or two lectures covering the first Jupyter notebook example, coding along line by line for better understanding. For example, section 8 on CNNs has notebooks on a basic CNN with the MNIST data, a colour-image CNN with the CIFAR data, and two more on real/custom image data. In round one, I only went through the MNIST one, as the rest are minor modifications of it.
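For reference, a basic MNIST CNN of the kind built in that section looks roughly like this. This is my own reconstruction of the general pattern, so the exact layer sizes are illustrative rather than the course’s notebook verbatim:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MNISTConvNet(nn.Module):
    """Two conv/pool stages followed by a fully connected head."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=3, stride=1)   # 28x28 -> 26x26
        self.conv2 = nn.Conv2d(6, 16, kernel_size=3, stride=1)  # 13x13 -> 11x11
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 10)                            # 10 digit classes

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # -> 6 x 13 x 13
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # -> 16 x 5 x 5
        x = x.view(x.size(0), -1)                    # flatten each sample
        x = F.relu(self.fc1(x))
        return self.fc2(x)                           # raw logits for CrossEntropyLoss

model = MNISTConvNet()
dummy = torch.randn(4, 1, 28, 28)  # a fake batch of 4 grayscale digits
print(model(dummy).shape)          # torch.Size([4, 10])
```

Moving from here to the CIFAR notebook is mostly a matter of changing the input channels from 1 to 3 and recomputing the flattened size, which is why the later notebooks felt like minor modifications of the first.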

In round two, I watched all the lectures I had skipped previously, mostly without coding along. I chose not to do any of the provided exercises (there is one per section), since I believe they are best saved for when I want to use PyTorch in a project of my own. At this point, I merely wanted to know what PyTorch is capable of and its basic syntax.

As I progressed through the course, I kept a study log, which I now do for every project I undertake. If you don’t yet have a method to track your progress, I strongly recommend starting with something similar. I created a notebook for this course on Notion, containing one large table. Each row in the table represents one study session, and there are four columns:

  1. Date
  2. Time: the number of hours spent studying. Most of my study sessions lasted 1.5 hours.
  3. Log: the video or notebook I reviewed
  4. Comments: where I jot down a few words about what I learned and how I feel about the course. I record notes such as “LSTM is so fascinating, read more” or “I am getting a little bored with it” here.

Overall, I spent 23.5 hours on the course; others may spend more, as I did not cover the entire course.

My study log

Evaluation

In general, I found this course to be a great introduction to PyTorch and some fundamental deep-learning concepts. By the end of the course, I was able to comprehend parts of PyTorch code written by others, which was very helpful. Furthermore, knowing terms like recurrent neural network and cross-entropy loss, and the basics of natural language processing, was also quite beneficial. For example, I now have an easier time reading deep-learning image processing papers, and I can understand how RNNs could be used to make weather predictions. Additionally, as a biophysicist, learning about the LSTM unit made me think about its biological relevance to memory processing.

I also now feel more confident discussing deep learning with others in the research field; at the very least, I feel less intimidated when they throw around geeky terminology. I know how to ask questions and where to find resources if I want to code my own deep-learning tools. Although there are other ways to achieve this, I found this course quite helpful.

However, quite a few topics were not explained well, especially in the later sections, such as the role of the dropout layer and the mechanism of the LSTM unit. While these topics were mentioned, I had to look elsewhere for more detailed explanations. It may simply be that there are too many topics to cover, and that some methods are practically useful and easy to implement but conceptually difficult to explain.

Also, while I coded along in the earlier sections, I coded less in the later chapters, as I found I was learning less and less per line of code. Despite this, I still skimmed through them, as they may be useful in the future. To solidify my background knowledge in these topics, I might take Andrew Ng’s Coursera courses on machine learning and deep learning.

Overall, I found this course informative and helpful. While I do not have other PyTorch courses to compare it with, if you are looking for a good place to start, you will definitely learn a lot from it.


Cheng-Yu Huang

PhD student @ University of Cambridge; a Taiwanese-Japanese biophysicist who spent his teenage years in the UK. Reading, writing and singing when not sciencing😉