Machine learning terminology

Erika Oliver · 4 min read

Machine Learning Fundamentals

Machine learning is a subset of artificial intelligence (AI) that empowers systems to learn from data and make decisions without explicit programming. The fundamental terms in this category include:

  • Data: The lifeblood of machine learning, data refers to the information used to train models. It can be structured or unstructured and is crucial for model accuracy.
  • Algorithm: An algorithm is a step-by-step set of instructions designed to perform a specific task. In machine learning, algorithms are the engines that drive model training and predictions.
  • Model: A model is the output generated by a machine learning algorithm after being trained on data. It encapsulates patterns and relationships within the data.
  • Hyperparameters: Configuration values, such as the learning rate or the depth of a tree, that are chosen before training and control how a model learns; unlike model parameters, they are not learned from the data.
  • Overfitting: Overfitting occurs when a model fits its training data too closely, capturing noise as well as signal, so it performs well on the training set but generalizes poorly to new data.
  • Underfitting: Underfitting occurs when a model is too simple to capture the relationship between the input and output variables, so it performs poorly even on the training data.
  • Training and Testing: Training exposes a model to labeled data so it can learn; testing assesses the model's performance on new, unseen data to evaluate how well it generalizes (see the sketch after this list).
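
The short sketch below ties several of these terms together; it is only an illustration, using hypothetical synthetic data and assuming NumPy is available. A model (here a fitted polynomial) is trained on one split of the data and tested on another, and a needlessly flexible model tends to overfit while a simple one generalizes.

```python
# Illustrative sketch of training, testing, and overfitting (hypothetical data; assumes NumPy).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=30)
y = 0.5 * x + rng.normal(scale=0.5, size=30)   # underlying relationship is linear plus noise

# Split the data: the first 20 points are used for training, the rest for testing.
x_train, y_train = x[:20], y[:20]
x_test, y_test = x[20:], y[20:]

def train_and_test_mse(degree):
    """Fit a polynomial of the given degree on the training split and return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)                      # "training" the model
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)  # error on seen data
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)     # error on unseen data
    return train_err, test_err

# A much higher degree typically drives training error down while test error rises (overfitting).
print("degree 1 :", train_and_test_mse(1))
print("degree 12:", train_and_test_mse(12))
```

Here the polynomial degree plays the role of a hyperparameter: it is chosen before training and controls how flexible the model is allowed to be.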

Types of Machine Learning

Machine learning encompasses various approaches, each with its unique terminology:

  • Supervised Learning: In supervised learning, models learn from labeled data, making predictions based on input-output pairs.
  • Unsupervised Learning: Unsupervised learning involves training models on unlabeled data, allowing them to discover patterns, such as clusters, on their own (the sketch after this list contrasts supervised and unsupervised learning).
  • Reinforcement Learning: Reinforcement learning focuses on training models to make sequences of decisions by rewarding positive outcomes.
  • Deep Learning: Deep learning involves neural networks with multiple layers, enabling complex pattern recognition.
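
To contrast the first two approaches, here is a minimal sketch, assuming scikit-learn is installed (the library choice is an assumption, not something the article prescribes): a supervised classifier learns from labeled examples, while a clustering algorithm groups the same samples without ever seeing the labels.

```python
# Minimal sketch contrasting supervised and unsupervised learning (assumes scikit-learn).
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Supervised learning: the model is trained on input-output pairs (features and labels).
classifier = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised predictions:     ", classifier.predict(X[:5]))

# Unsupervised learning: no labels are provided; the model finds groups on its own.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("unsupervised cluster labels:", clusters[:5])
```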

Evaluation Metrics

Assessing the performance of machine learning models requires understanding key metrics:

  • Accuracy: The ratio of correct predictions to the total number of predictions.
  • Precision and Recall: Precision measures the accuracy of positive predictions, while recall assesses the ability to capture all relevant instances.
  • F1 Score: The harmonic mean of precision and recall, providing a balanced evaluation metric.
  • Loss: A number indicating how far a model's prediction was from the true value on a single example; training aims to minimize the average loss across examples (the sketch after this list computes the first three metrics on a small example).
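
The first three metrics can be computed directly from counts of true/false positives and negatives, as the sketch below does for a small, hypothetical set of binary predictions.

```python
# Minimal sketch of accuracy, precision, recall, and F1 on hypothetical binary predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives

accuracy = (tp + tn) / len(y_true)                   # correct predictions / all predictions
precision = tp / (tp + fp)                           # how many predicted positives were right
recall = tp / (tp + fn)                              # how many actual positives were found
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of precision and recall

print(f"accuracy={accuracy:.2f}  precision={precision:.2f}  recall={recall:.2f}  f1={f1:.2f}")
```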

Common Algorithms

  • Linear Regression: A basic algorithm for predicting a continuous outcome based on linear relationships.
  • Decision Trees: Tree-like models that make decisions based on feature splits.
  • Random Forest: An ensemble of decision trees for improved accuracy and robustness.
  • Support Vector Machines (SVM): Algorithms that find the optimal hyperplane for classification tasks.
  • Backpropagation: The algorithm that computes the gradient of the loss with respect to every weight in a neural network, making it practical to train feedforward networks with gradient-based optimization (a short sketch of a few of the algorithms above follows this list).
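
As a rough illustration of how a few of these algorithms are used in practice, here is a minimal sketch assuming scikit-learn is installed (again, the library is an assumption rather than something the article specifies):

```python
# Minimal sketch fitting a decision tree, a random forest, and an SVM (assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(n_estimators=100, random_state=0),
              SVC(kernel="linear")):
    model.fit(X_train, y_train)             # train on the labeled training split
    score = model.score(X_test, y_test)     # accuracy on unseen test data
    print(f"{type(model).__name__}: test accuracy = {score:.2f}")
```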

Advanced Concepts

  • Neural Networks: Mimicking the human brain, neural networks consist of interconnected nodes that process information.
  • Transfer Learning: Leveraging pre-trained models for new tasks, saving time and resources.
  • Gradient Descent: An optimization algorithm that iteratively adjusts a model's parameters in the direction that most reduces its error during training (a minimal sketch follows this list).
  • Artificial Neural Network (ANN): A computing system whose architecture is inspired by the biological brains of living organisms, built from layers of interconnected artificial neurons.
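
To make gradient descent concrete, here is a minimal sketch (hypothetical data, NumPy assumed) that fits the single weight of a linear model by repeatedly stepping against the gradient of the mean squared error:

```python
# Minimal sketch of gradient descent for a one-parameter model y = w * x (hypothetical data).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x                          # the weight we hope to recover is 2.0

w = 0.0                              # initial guess for the weight
learning_rate = 0.05                 # hyperparameter controlling the step size

for step in range(100):
    error = w * x - y                      # prediction error on each example
    gradient = 2 * np.mean(error * x)      # derivative of mean((w*x - y)**2) with respect to w
    w -= learning_rate * gradient          # move against the gradient to reduce the loss

print("learned weight:", round(w, 3))      # converges toward 2.0
```

In a neural network, backpropagation plays the role of the gradient computation above, but for every weight in every layer at once.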





About Erika Oliver

Erika Oliver is a successful entrepreneur. She is the founder of Acme Inc, a bootstrapped business that builds affordable SaaS tools for local news, indie publishers, and other small businesses.
