Object detection is useful for understanding an image: it describes both what objects are present and where they are located. In general, there are two different approaches to this task –
Managing multiple research experiments at a time can be overwhelming, and deep learning research is no exception. Beyond the usual challenges of software development, machine learning developers face new ones: experiment management (tracking which parameters, code, and data went into a result) and reproducibility (being able to rerun the same code in the same environment later)!
Lately, a lot of my friends have been asking about my deep learning workstation setup. In this post I describe my hardware, OS, and the different packages I use. In particular, based on the questions, I found that most of the interest has been around managing different Python versions and libraries such as PyTorch and TensorFlow.
Modeling the relationship between a scalar response (or dependent variable) and one or more explanatory variables (or independent variables) is commonly referred to as a regression problem. The simplest model of such a relationship can be described by a linear function - referred to as linear regression.
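A minimal sketch of fitting such a linear function with scikit-learn (the library choice and the synthetic data here are my own illustration, not necessarily what the post uses):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data generated from y = 3x + 2 with a little Gaussian noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3 * X.ravel() + 2 + rng.normal(0, 0.5, size=100)

# Ordinary least squares recovers the slope and intercept
model = LinearRegression().fit(X, y)
print(model.coef_[0], model.intercept_)  # approximately 3 and 2
```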
There are two types of data visualizations: exploratory and explanatory. Explanatory analysis is what happens when you have something specific you want to show an audience. The aim of explanatory visualizations is to tell stories - they’re carefully constructed to surface key findings.
In the previous post, we learned about tree-based learning methods: the basics of tree-based models and the use of bagging to reduce variance. We also looked at one of the most famous learning algorithms based on the idea of bagging - random forests.
Tree-based learning algorithms are quite common in data science competitions. These algorithms give predictive models high accuracy, stability, and ease of interpretation. Unlike linear models, they capture non-linear relationships quite well. Common examples of tree-based models are decision trees, random forests, and boosted trees.
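As a small illustration of two of these models (my own sketch using scikit-learn and the Iris dataset; the post's own examples may differ):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# A single shallow decision tree: easy to interpret, but higher variance
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# A random forest averages many decorrelated trees to reduce that variance
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print(tree.score(X, y), forest.score(X, y))
```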
In the previous post on Support Vector Machines (SVM), we looked at the mathematical details of the algorithm. In this post, I will discuss the practical implementation of SVM for both classification and regression.
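A minimal sketch of both uses with scikit-learn (illustrative only; the synthetic data and parameters here are my own assumptions, not the post's choices):

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.svm import SVC, SVR

# Classification: an RBF-kernel SVC on a synthetic two-class problem
Xc, yc = make_classification(n_samples=200, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(Xc, yc)

# Regression: SVR on a synthetic problem (targets standardized first,
# since SVMs are sensitive to feature and target scale)
Xr, yr = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)
yr = (yr - yr.mean()) / yr.std()
reg = SVR(kernel="rbf", C=10.0).fit(Xr, yr)

print(clf.score(Xc, yc), reg.score(Xr, yr))
```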
In this post, we will explore a class of machine learning methods called Support Vector Machines, commonly known as SVM.