👥 1 conference
🎤 2 talks
📅 Years active: 2019 to 2019
No biography available.
1 known conference
An introduction to deep learning by training a CNN model to classify images of cats and dogs.
Deep learning has been at a hype peak for the last two to five years, and it seems to be here to stay. As a developer, you might be interested in getting in touch with deep learning to see what the hype is all about, and if you're not, you should be! Deep learning can offer a new way of looking at problems and developing innovative ways of solving them.
The goal of the workshop is to give participants a first hands-on experience in training and using a CNN to classify images. For this purpose, we will use the most prominent frameworks (Keras & TensorFlow) and take a glimpse at the most popular machine learning community (Kaggle). The workshop is planned to be in Python, which is a popular programming language for deep learning. Nevertheless, you don't need any Python knowledge to participate!
Participants will go through the process of setting up and coding a simple app to train a CNN on the task of classifying cats vs. dogs. While it trains, the training and validation metrics will be observed.
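To make that concrete, here is a minimal sketch of what such a training script could look like with Keras and TensorFlow. The directory layout (a `data/train` folder with one subfolder per class), image size, and model architecture are illustrative assumptions, not the workshop's actual code:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Assumed layout: data/train/cats and data/train/dogs (placeholder paths).
train_ds = keras.utils.image_dataset_from_directory(
    "data/train",
    validation_split=0.2,
    subset="training",
    seed=42,
    image_size=(150, 150),
    batch_size=32,
)
val_ds = keras.utils.image_dataset_from_directory(
    "data/train",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=(150, 150),
    batch_size=32,
)

# A small CNN: a few convolution/pooling blocks followed by a dense classifier.
model = keras.Sequential([
    keras.Input(shape=(150, 150, 3)),
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # single sigmoid output: cat vs. dog
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Keras prints training and validation loss/accuracy per epoch, which is
# the "observing the metrics" part of the exercise.
history = model.fit(train_ds, validation_data=val_ds, epochs=10)

model.save("cats_vs_dogs.h5")  # saved for the inference step below
```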
After training is completed and the metrics have been observed, participants will set up and code an "inference engine" that uses the trained model to classify new cat and dog images.
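A rough sketch of such an inference step is shown below; the file names and the saved-model path are placeholders matching the training sketch above, not the workshop materials:

```python
import numpy as np
from tensorflow import keras

# Load the model saved after training (placeholder path from the sketch above).
model = keras.models.load_model("cats_vs_dogs.h5")

# Load a new image and resize it the same way as during training.
img = keras.utils.load_img("new_image.jpg", target_size=(150, 150))
x = keras.utils.img_to_array(img)
x = np.expand_dims(x, axis=0)  # add a batch dimension

# No manual /255 needed here: rescaling happens inside the model's Rescaling layer.
prob = float(model.predict(x)[0][0])
print("dog" if prob > 0.5 else "cat", f"(p={prob:.2f})")
```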
Participants should bring their own laptops and prepare them with as many of the following steps as possible:
Haitham, Michael, and Jan will be organising and guiding the workshop. They will gladly help you get a better understanding of deep learning and assist you when needed.
This talk is in three parts. First, we will present the most common deep learning frameworks, their advantages and disadvantages, and which languages they support. Then we will talk about cloud providers that support deep learning. In the last section, we will talk about our experience with deep learning in the cloud.
For many frameworks, Python seems to be the preferred language, largely due to its ease of use and extensive community support. However, most frameworks also support other languages such as C++, R, and Java. For the most part, the combination of preferred language and intended use case narrows down the options for a deep learning framework. There are currently many deep learning frameworks, the most common being TensorFlow, Keras, PyTorch, Caffe, and Theano. Some of them excel at image classification challenges, while others are more suitable for natural language processing or sentiment analysis. Your choice of framework should depend on many factors, including the intended application, your preferred language, and the availability of good documentation and community support. Frameworks usually default to running on CPUs and need extra setup to run on GPUs, but this setup is trivial enough that it should not be a deciding factor. GPUs have a huge advantage in training and inference speed over CPUs due to their parallelization capabilities.
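As an illustration of how little that setup involves once the GPU drivers are in place, this is roughly how you can check whether TensorFlow sees a GPU and where an operation runs; treat it as a sketch rather than a setup guide:

```python
import tensorflow as tf

# List the GPUs visible to TensorFlow; an empty list means it will fall back to the CPU.
gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs available: {len(gpus)}")

# Operations are placed on a GPU automatically when one is present;
# a device scope can force the placement explicitly.
with tf.device("/GPU:0" if gpus else "/CPU:0"):
    x = tf.random.normal((1000, 1000))
    y = tf.matmul(x, x)  # runs on the GPU if one was found
```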
These days there are countless cloud providers for computing power, and many of them have responded to the increased interest in this area of AI by providing VMs specially tailored for deep learning. All the big players (Google Cloud, AWS, Azure) provide GPU-enabled machines with deep learning frameworks pre-configured. However, there are also smaller cloud providers that offer interesting models in order to survive the competition. When deciding which provider to go with, one should consider many factors, including the pricing model, the actual computing power delivered, availability regions, and the existence of pre-configured VMs for deep learning. For example, AWS has a good pricing model and a great choice of deep learning frameworks, but for a while it was impossible to provision a new machine because all of the resources were already booked. It's also worth mentioning that some cloud providers offer trial periods or sign-up credit to test their services, so always be on the lookout for those!
At Viaboxx we are interested in trying out new technologies, so we got started with deep learning by creating and running a deep learning showcase on a cloud provider. We used Keras with a TensorFlow backend for the source code and Kaggle as the source of data. For a VM to perform the training, we compared different cloud providers and chose a smaller one, Paperspace. Although the experience was not free of hiccups, we ended up with a good understanding of the technologies and stacks involved.