The Supervised Machine Learning Bootcamp
- Description
- Curriculum
- FAQ
- Reviews
Do you want to master supervised machine learning and land a job as a machine learning engineer or data scientist?
This Supervised Machine Learning course is designed to equip you with the essential tools to tackle real-world challenges. You'll dive into powerful algorithms like Naïve Bayes, K-nearest neighbors (KNN), Support Vector Machines, Decision Trees, Random Forests, and Ridge and Lasso Regression: skills every top-tier data professional needs.
By the end of this course, you'll not only understand the theory behind these six algorithms, but also gain hands-on experience through practical case studies using Python's scikit-learn library. Whether you're looking to break into the industry or level up your expertise, this course gives you the knowledge and confidence to stand out in the field.
First, we cover naïve Bayes – a powerful technique based on Bayesian statistics. Its strong point is speed, which makes it great for real-time tasks. Some of the most common use cases are filtering spam emails, flagging inappropriate comments on social media, and performing sentiment analysis. In the course, we have a practical example of how exactly that works, so stay tuned!
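To give you a small taste of the hands-on part, here is a minimal sketch of a naïve Bayes spam filter using scikit-learn's CountVectorizer and MultinomialNB (the four inline messages are made up for illustration; they are not the course's dataset):

    # A toy naive Bayes spam filter on a made-up dataset.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    messages = ["win a free prize now", "meeting at noon tomorrow",
                "free cash click here", "lunch with the team"]
    labels = ["spam", "ham", "spam", "ham"]

    vectorizer = CountVectorizer()            # turn text into word-count vectors
    X = vectorizer.fit_transform(messages)

    model = MultinomialNB().fit(X, labels)    # naive Bayes for count data
    print(model.predict(vectorizer.transform(["free prize inside"])))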
Next up is K-nearest neighbors – one of the most widely used machine learning algorithms. Why is that? Because of its simplicity: it makes predictions by measuring the distances to nearby data points and letting the closest neighbors vote.
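As a quick preview, here is a minimal sketch of that idea with scikit-learn's KNeighborsClassifier on synthetic data (a stand-in, not the course's dataset):

    # Classify points by a majority vote among the 5 nearest neighbors.
    from sklearn.datasets import make_blobs
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_blobs(n_samples=100, centers=2, random_state=0)  # toy 2-class data

    knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
    knn.fit(X, y)
    print(knn.predict(X[:3]))   # predict labels for a few points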
We’ll follow up with decision tree algorithms, which will serve as the basis for our next topic – namely random forests. These are ensemble learners that combine the predictions of many decision trees to produce more accurate and stable results.
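In scikit-learn, moving from one tree to a whole forest is essentially a one-line change. Here is a minimal sketch on the Iris dataset (which also appears later in the course); the hyperparameter values are illustrative:

    # One decision tree versus an ensemble of 100 trees on the same data.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(tree.score(X, y), forest.score(X, y))   # training accuracy of each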
After that, we’ll meet Support Vector Machines – classification and regression models, capable of utilizing different kernels to solve a wide variety of problems. In the practical part of this section, we’ll build a model for classifying mushrooms as either poisonous or edible. Exciting!
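To illustrate the kernel idea, here is a minimal sketch with scikit-learn's SVC on a synthetic "two moons" problem (the mushroom case study in the course uses its own dataset):

    # The same SVM model with two different kernels on non-linearly separable data.
    from sklearn.datasets import make_moons
    from sklearn.svm import SVC

    X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

    for kernel in ("linear", "rbf"):
        clf = SVC(kernel=kernel, C=1.0).fit(X, y)
        print(kernel, clf.score(X, y))   # compare training accuracy per kernel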
Finally, you’ll learn about Ridge and Lasso Regression – regularization techniques that improve on plain linear regression by shrinking coefficients, limiting the influence of individual features and preventing overfitting. We’ll go over the differences and similarities, as well as the pros and cons of both regression techniques.
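Here is a minimal sketch of that shrinking effect on synthetic data, where only the first of ten features actually matters (the alpha values are illustrative):

    # Compare coefficients: Lasso drives irrelevant features toward exactly zero.
    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression, Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 10))
    y = 3.0 * X[:, 0] + rng.normal(size=100)   # only feature 0 is informative

    for model in (LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=0.1)):
        model.fit(X, y)
        print(type(model).__name__, np.round(model.coef_, 2))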
Each section of this course is organized in a uniform way for an optimal learning experience:
– We start with the fundamental theory for each algorithm. To enhance your understanding of the topic, we’ll walk you through a theoretical case and introduce the mathematical formulas behind the algorithm.
– Then, we move on to building a model to solve a practical problem with it. This is done using Python’s famous scikit-learn (sklearn) library.
– We analyze the performance of our models with the aid of metrics such as accuracy, precision, recall, and the F1 score.
– We also study techniques such as grid search and cross-validation to improve the model’s performance (a preview sketch follows this list).
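For instance, here is a minimal sketch of a cross-validated grid search combined with the metrics listed above, on a synthetic problem (the parameter grid is illustrative):

    # 5-fold cross-validated grid search over kernel and C, then held-out metrics.
    from sklearn.datasets import make_classification
    from sklearn.metrics import classification_report
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    grid = {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10]}
    search = GridSearchCV(SVC(), grid, cv=5).fit(X_train, y_train)

    print(search.best_params_)
    print(classification_report(y_test, search.predict(X_test)))  # precision, recall, F1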
To top it all off, we have a range of complementary exercises and quizzes, so that you can enhance your skill set. Not only that, but we also offer comprehensive course materials that you can consult at any time.
The lessons have been created in 365’s unique teaching style many of you are familiar with. We aim to deliver complex topics in an easy-to-understand way, focusing on practical application and visual learning.
With the power of animations, quiz questions, exercises, and well-crafted course notes, the Supervised Machine Learning course will fulfill all your learning needs.
If you want to take your data science skills to the next level and add in-demand tools to your resume, this course is the perfect choice for you.
Click ‘Buy this course’ to continue your data science journey today!
Curriculum

- 6. Motivation (Video Lesson)
- 7. Bayes' Thought Experiment (Video Lesson)
- 8. Bayes' Thought Experiment (Quiz)
- 9. Bayes' Thought Experiment: Assignment (Text)
- 10. Bayes' Theorem (Video Lesson)
- 11. Bayes' Theorem (Quiz)
- 12. The Ham-or-Spam Example (Video Lesson)
- 13. The Ham-or-Spam Example (Quiz)
- 14. The Ham-or-Spam Example: Assignment (Text)
- 15. The YouTube Dataset: Creating the Data Frame (Video Lesson)
- 16. CountVectorizer (Video Lesson)
- 17. The YouTube Dataset: Preprocessing (Video Lesson)
- 18. The YouTube Dataset: Preprocessing: Assignment (Text)
- 19. The YouTube Dataset: Classification (Video Lesson)
- 20. The YouTube Dataset: Classification: Assignment (Text)
- 21. The YouTube Dataset: Confusion Matrix (Video Lesson)
- 22. The YouTube Dataset: Accuracy, Precision, Recall, and the F1 Score (Video Lesson)
- 23. The YouTube Dataset: Changing the Priors (Video Lesson)
- 24. Naïve Bayes: Assignment (Text)
- 25. Motivation (Video Lesson)
- 26. Motivation (Quiz)
- 27. Math Prerequisites: Distance Metrics (Video Lesson)
- 28. Math Prerequisites: Distance Metrics (Quiz)
- 29. Random Dataset: Generating the Dataset (Video Lesson)
- 30. Random Dataset: Visualizing the Dataset (Video Lesson)
- 31. Random Dataset: Classification (Video Lesson)
- 32. Random Dataset: How to Break a Tie (Video Lesson)
- 33. Random Dataset: Decision Regions (Video Lesson)
- 34. Random Dataset: Choosing the Best K-value (Video Lesson)
- 35. Random Dataset: Grid Search (Video Lesson)
- 36. Random Dataset: Model Performance (Video Lesson)
- 37. KNeighbors Classifier: Assignment (Text)
- 38. Theory with a Practical Example (Video Lesson)
- 39. KNN vs Linear Regression: A Linear Problem (Video Lesson)
- 40. KNN vs Linear Regression: A Non-linear Problem (Video Lesson)
- 41. KNeighbors Regressor: Assignment (Text)
- 42. Pros and Cons (Video Lesson)
- 43. What is a Tree in Computer Science? (Video Lesson)
- 44. The Concept of Decision Trees (Video Lesson)
- 45. Decision Trees in Machine Learning (Video Lesson)
- 46. Decision Trees: Pros and Cons (Video Lesson)
- 47. Practical Example: The Iris Dataset (Video Lesson)
- 48. Practical Example: Creating a Decision Tree (Video Lesson)
- 49. Practical Example: Plotting the Tree (Video Lesson)
- 50. Decision Tree Metrics Intuition: Gini Impurity (Video Lesson)
- 51. Decision Tree Metrics: Information Gain (Video Lesson)
- 52. Tree Pruning: Dealing with Overfitting (Video Lesson)
- 53. Random Forest as Ensemble Learning (Video Lesson)
- 54. Bootstrapping (Video Lesson)
- 55. From Bootstrapping to Random Forests (Video Lesson)
- 56. Random Forest in Code - Glass Dataset (Video Lesson)
- 57. Census Data and Income - Preprocessing (Video Lesson)
- 58. Training the Decision Tree (Video Lesson)
- 59. Training the Random Forest (Video Lesson)
- 60. Introduction to Support Vector Machines (Video Lesson)
- 61. Intro to SVMs (Quiz)
- 62. Linearly Separable Classes - Hard Margin Problem (Video Lesson)
- 63. Non-linearly Separable Classes - Soft Margin Problem (Video Lesson)
- 64. Soft Margin Problem (Quiz)
- 65. Kernels - Intuition (Video Lesson)
- 66. Kernels (Quiz)
- 67. Intro to the Practical Case (Video Lesson)
- 68. Preprocessing the Data (Video Lesson)
- 69. Splitting the Data into Train and Test and Rescaling (Video Lesson)
- 70. Implementing a Linear SVM (Video Lesson)
- 71. Implementing a Linear SVM (Quiz)
- 72. Analyzing the Results: Confusion Matrix, Precision, and Recall (Video Lesson)
- 73. Cross-validation (Video Lesson)
- 74. Choosing the Kernels and C Values for Cross-validation (Video Lesson)
- 75. Hyperparameter Tuning Using GridSearchCV (Video Lesson)
- 76. Support Vector Machines - Assignment (Text)
