Chapter 13: Advanced Topics & Research Frontiers

Multimodal Learning, Few-Shot Learning, Federated Learning, RL, Meta-Learning & Continuous Learning


🎯 Learning Outcomes

After studying this chapter, you will be able to:

  1. Understand emerging research areas in modern machine learning
  2. Explain multimodal learning and vision-language integration
  3. Identify few-shot and zero-shot learning paradigms
  4. Analyze federated learning for privacy-preserving ML
  5. Understand the foundations of reinforcement learning and its applications
  6. Explain meta-learning and learning-to-learn principles
  7. Apply continuous-learning strategies to keep up with ML research trends
  8. Read and critically analyze ML research papers
  9. Contribute to the ML research community

13.1 Multimodal Learning: Unifying Vision & Language

13.1.1 Why Multimodal?

Humans understand the world through multiple modalities at once:

  • Reading text while looking at an image
  • Hearing audio while watching a video
  • Combining different sensors

Multimodal Learning = models that process and integrate multiple data types.

Key Applications:

  • Vision-Language Models (CLIP, BLIP)
  • Visual Question Answering (VQA)
  • Image captioning
  • Video understanding with audio
  • Autonomous vehicles (fusing camera, lidar, and radar)

13.1.2 Vision-Language Models: CLIP

CLIP (Contrastive Language-Image Pre-training, Radford et al. 2021) was a breakthrough in multimodal learning.

Architecture:

Text Input ──> Text Encoder (Transformer) ──> Text Embedding
                                                  ↓
                                            [Contrastive Loss]
                                                  ↑
Image Input ──> Image Encoder (ViT) ────> Image Embedding

Key Idea: Learn aligned embeddings for images and their text descriptions.

Contrastive Loss:

\[\mathcal{L} = -\log \frac{e^{\text{sim}(I, T)/\tau}}{\sum_j e^{\text{sim}(I, T_j)/\tau}}\]

Maximize the similarity of matched pairs while minimizing it for unmatched pairs.
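This loss can be sketched directly. Below is a minimal InfoNCE-style version with the symmetric image-to-text and text-to-image directions that CLIP averages; the random embeddings are stand-ins for real encoder outputs:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, tau=0.07):
    """InfoNCE-style loss: the i-th image should match the i-th text."""
    # Normalize so the dot product equals cosine similarity
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.T / tau       # (N, N) similarity matrix
    targets = torch.arange(len(image_emb))      # matched pairs lie on the diagonal
    # Average the image->text and text->image directions (as CLIP does)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# Toy batch: 8 matched pairs of 512-dim embeddings
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```

Each row of `logits` is a softmax-classification problem over the batch, which is exactly the sum in the denominator of the formula above.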

Implementation with Hugging Face:

from transformers import CLIPProcessor, CLIPModel
from PIL import Image
import requests

# Load model
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Image
image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)

# Text candidates
texts = [
    "a cat on a couch",
    "a dog running in field",
    "a bird flying",
    "children playing soccer"
]

# Process
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

# Forward pass
outputs = model(**inputs)

# Get similarity scores
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)

# Print results
for i, text in enumerate(texts):
    print(f"{text}: {probs[0][i].item():.4f}")
💡 Zero-Shot Classification with CLIP

CLIP enables classification without any fine-tuning, simply by providing text descriptions!

# For a custom classification task
labels = ["cat", "dog", "bird"]
text_descriptions = [f"a photo of a {label}" for label in labels]

This is zero-shot learning: the model can classify categories it has never seen before!

13.1.3 Practical Challenges

Challenges in multimodal learning:

  1. Data Alignment: Matching images with the correct text descriptions
  2. Scalability: Processing multiple modalities requires significant compute
  3. Modality Gap: Different modalities have different representations
  4. Missing Modalities: Handling incomplete data (e.g., no image available)

13.2 Few-Shot & Zero-Shot Learning

13.2.1 The Few-Shot Learning Problem

Traditional ML: Requires thousands of labeled examples

Few-shot Learning: Learn from very limited examples (1-5 per class)

Real-world motivation:

  • Rare diseases in medical imaging
  • Fraud detection (limited fraud examples)
  • New product recognition in e-commerce
  • Cybersecurity threats (emerging malware)

13.2.2 Approaches to Few-Shot Learning

1. Transfer Learning + Fine-tuning:

Pre-trained Model ──> Fine-tune with a few examples ──> Prediction

The most practical approach, and the one that most often succeeds!

2. Metric Learning (Prototypical Networks):

Learn a distance metric ──> Compare test sample to class prototypes ──> Classify

3. Meta-Learning:

Learn how to learn ──> Adapt quickly to a new task ──> Prediction

Comparison Table:

| Approach          | Complexity | Data Required | Performance | Industry Use |
|-------------------|------------|---------------|-------------|--------------|
| Transfer + FT     | Low        | 5-50 examples | Good        | Very common  |
| Prototypical Nets | Medium     | 1-5 examples  | Very good   | Emerging     |
| Meta-Learning     | High       | 1-5 examples  | Excellent   | Research     |
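The prototypical-networks idea can be illustrated with a minimal sketch; the 2-D "embeddings" below are toy stand-ins for the outputs of a trained encoder:

```python
import numpy as np

def prototypical_classify(support_emb, support_labels, query_emb):
    """Classify a query by its distance to per-class mean embeddings (prototypes)."""
    classes = np.unique(support_labels)
    # Prototype = mean embedding of each class's support examples
    prototypes = np.stack([
        support_emb[support_labels == c].mean(axis=0) for c in classes
    ])
    # Assign the query to the nearest prototype (Euclidean distance)
    dists = np.linalg.norm(prototypes - query_emb, axis=1)
    return classes[np.argmin(dists)]

# 2-way, 2-shot toy example
support = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
print(prototypical_classify(support, labels, np.array([4.8, 5.2])))  # -> 1
```

With only 1-5 support examples per class, the prototype is the whole "model" for that class, which is why this approach needs so little labeled data.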

13.2.3 Zero-Shot Learning

Zero-shot Learning: Classify categories never seen during training

Example:

Trained on: cat, dog, bird
Test on: tiger, wolf, eagle (completely new!)

How it works:

  1. Semantic Attributes: Describe categories using shared attributes
    • Has_fur, Has_wings, Is_predator, etc.
  2. Word Embeddings: Use pre-trained embeddings (Word2Vec, GloVe)
    • Tiger embedding ← (cat + predator)
    • Eagle embedding ← (bird + predator)
  3. Knowledge Graphs: Leverage semantic relationships
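A minimal sketch of the semantic-attribute approach follows; the attribute vectors are illustrative, not taken from any real dataset:

```python
import numpy as np

# Binary attribute vectors: [has_fur, has_wings, is_predator]
class_attributes = {
    "cat":   np.array([1, 0, 0]),
    "bird":  np.array([0, 1, 0]),
    "tiger": np.array([1, 0, 1]),   # never seen during training
    "eagle": np.array([0, 1, 1]),   # never seen during training
}

def zero_shot_predict(predicted_attributes):
    """Map a predicted attribute vector to the closest class, seen or unseen."""
    names = list(class_attributes)
    dists = [np.linalg.norm(class_attributes[n] - predicted_attributes)
             for n in names]
    return names[int(np.argmin(dists))]

# An attribute predictor trained only on cat/bird images detects fur + predator:
print(zero_shot_predict(np.array([0.9, 0.1, 0.8])))  # -> tiger
```

The attribute predictor only ever needs training data for seen classes; unseen classes are reachable purely through their attribute descriptions.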

CLIP for Zero-Shot Classification:

from transformers import CLIPProcessor, CLIPModel
from PIL import Image
import torch

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def zero_shot_classify(image, candidate_labels):
    """
    Classify image menggunakan any text labels
    (including labels not in training set!)
    """
    # Prepare texts
    texts = [f"a photo of a {label}" for label in candidate_labels]

    inputs = processor(text=texts, images=image,
                      return_tensors="pt", padding=True)

    with torch.no_grad():
        outputs = model(**inputs)

    logits = outputs.logits_per_image
    probs = torch.softmax(logits, dim=1)

    result = {
        candidate_labels[i]: probs[0][i].item()
        for i in range(len(candidate_labels))
    }

    return sorted(result.items(), key=lambda x: x[1], reverse=True)

# Use with ANY labels
image = Image.open("sample.jpg")
results = zero_shot_classify(image, ["cat", "dog", "tiger", "wolf"])
⚠️ Few-Shot Learning Pitfalls
  1. Distribution Shift: A few examples may not be representative
  2. Overfitting: The model can memorize the few examples
  3. Label Quality: Every example counts; labeling errors are critical
  4. Domain Gap: The pre-trained model must be related to the target domain

Best Practices:

  • Validate with k-fold cross-validation
  • Ensure diverse examples in the few-shot set
  • Combine with data augmentation

13.3 Federated Learning: Privacy-Preserving ML

13.3.1 The Federated Learning Problem

Traditional ML Pipeline:

Collect Data → Send to Server → Train Model → Deploy
                (Privacy Risk!)

Federated Learning:

User 1: Train locally
User 2: Train locally
User 3: Train locally
        ↓
   Send updates → Server aggregates
        ↓
   Send new model back

The model can be trained without collecting raw data!

13.3.2 Key Applications

  1. Mobile Keyboard Prediction
    • Train on the user's device
    • Never send text messages to server
    • Google, Apple, etc.
  2. Healthcare
    • Hospital A trains locally
    • Hospital B trains locally
    • Share only model updates
    • Patient privacy preserved
  3. Financial Services
    • Bank branches train locally
    • Fraud detection models improve
    • Raw transaction data stays local
  4. Cybersecurity
    • Organizations train threat detection models
    • Share intelligence without exposing data

13.3.3 Federated Averaging (FedAvg)

Algorithm:

Server:
  1. Initialize model w
  2. For each round:
     a. Select random subset of clients
     b. Send current model to clients
     c. Clients compute gradients locally
     d. Clients send gradients back
     e. Server averages gradients
     f. Update model: w = w - α * avg_gradients

Client:
  1. Receive model from server
  2. Train locally for E epochs
  3. Send gradients (or model update) back

Mathematically (a simplified one-step view; full FedAvg averages the locally trained weights, typically weighted by each client's dataset size):

\[w^{t+1} = w^t - \alpha \frac{1}{K} \sum_{k=1}^{K} \nabla L_k(w^t)\]

Where:

  • \(K\): number of clients
  • \(\nabla L_k\): gradient from client \(k\)
  • \(\alpha\): learning rate
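The round structure can be sketched as a toy simulation: three clients jointly fit a one-parameter linear model on synthetic private data, and the server applies the uniform gradient average from the formula above (all names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = 2.0

# Three clients, each holding private data y = true_w * x + noise
client_data = []
for _ in range(3):
    x = rng.normal(size=20)
    y = true_w * x + 0.1 * rng.normal(size=20)
    client_data.append((x, y))

def local_gradient(w, x, y):
    """Gradient of the MSE loss for a 1-parameter linear model."""
    return np.mean(2 * (w * x - y) * x)

w, alpha = 0.0, 0.1
for _ in range(50):                                           # communication rounds
    grads = [local_gradient(w, x, y) for x, y in client_data]  # computed locally
    w -= alpha * np.mean(grads)                                # server averages

print(w)  # converges near true_w = 2.0; raw (x, y) never leave the clients
```

Only the scalar gradients cross the network; the raw data arrays stay on each client, which is the entire point of the scheme.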

13.3.4 Challenges & Solutions

Communication Cost:

  • Problem: Repeatedly sending gradients is expensive
  • Solution: Gradient compression, quantization

Stragglers:

  • Problem: Slow devices hold up the entire system
  • Solution: Asynchronous updates, adaptive scheduling

Privacy:

  • Problem: Gradients can leak information
  • Solution: Differential privacy, secure aggregation

Heterogeneity:

  • Problem: Non-IID data across clients
  • Solution: FedProx (proximal term), personalization
📌 Federated Learning for Cybersecurity

Use Case: Network Intrusion Detection

Organization A (Banking):
  └─> Train on banking fraud patterns

Organization B (Retail):
  └─> Train on retail fraud patterns

Organization C (Government):
  └─> Train on government cyber attacks

Combined model: better at detecting every type of threat!
All raw data stays private.

13.4 Reinforcement Learning Basics

13.4.1 What is RL?

RL Philosophy: Learn by interacting with an environment and receiving rewards.

Classic Problems:

  • Game playing (Chess, Go, video games)
  • Robot control (walking, grasping)
  • Autonomous driving
  • Resource allocation
  • Cybersecurity (automated defense)

RL vs Supervised Learning:

| Aspect      | Supervised                | RL                         |
|-------------|---------------------------|----------------------------|
| Data        | Labeled examples          | Reward signals             |
| Feedback    | Immediate (training data) | Delayed (after actions)    |
| Exploration | No                        | Essential                  |
| Goal        | Predict accurately        | Maximize cumulative reward |

13.4.2 RL Fundamentals

Key Components:

  1. Agent: The decision maker
  2. Environment: The world the agent interacts with
  3. Action: What the agent can do
  4. State: The condition of the environment
  5. Reward: The feedback signal

Markov Decision Process (MDP):

State (s) ──> Agent ──> Action (a) ──> Environment ──> Reward (r), Next State (s')

Objective: Maximize cumulative reward:

\[G_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + ...\]

Where \(\gamma \in [0,1]\) is the discount factor.
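A quick numeric check of the discounted return, using a hypothetical reward sequence:

```python
def discounted_return(rewards, gamma=0.9):
    """G_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ..."""
    g = 0.0
    for r in reversed(rewards):   # fold from the end: G = r + gamma * G_next
        g = r + gamma * g
    return g

print(discounted_return([1, 1, 1], gamma=0.9))  # 1 + 0.9 + 0.81 = 2.71
```

Note how rewards further in the future contribute less, and how \(\gamma\) close to 0 makes the agent myopic while \(\gamma\) close to 1 makes it far-sighted.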

13.4.3 Two Main Approaches

1. Value-Based RL:

  • Learn a value function \(V(s)\) = expected return from state \(s\)
  • Choose the action with the highest value
  • Algorithms: Q-learning, DQN

2. Policy-Based RL:

  • Learn a policy \(\pi(a|s)\) = probability distribution over actions
  • Directly optimize the policy to maximize rewards
  • Algorithms: Policy Gradient, PPO

Simple Q-Learning Example:

import numpy as np

class SimpleGridworld:
    """1D gridworld: move left/right to reach goal"""

    def __init__(self, size=5):
        self.size = size
        self.goal = size - 1
        self.agent_pos = 0

    def reset(self):
        self.agent_pos = 0
        return self.agent_pos

    def step(self, action):
        """action: 0=left, 1=right"""
        if action == 1:  # Move right
            self.agent_pos = min(self.agent_pos + 1, self.size - 1)
        else:  # Move left
            self.agent_pos = max(self.agent_pos - 1, 0)

        # Reward
        reward = 1.0 if self.agent_pos == self.goal else -0.01
        done = (self.agent_pos == self.goal)

        return self.agent_pos, reward, done

# Q-Learning
env = SimpleGridworld(size=5)
Q = np.zeros((5, 2))  # Q[state, action]

# Hyperparameters
alpha = 0.1  # learning rate
gamma = 0.9  # discount factor
epsilon = 0.1  # exploration rate

# Training
for episode in range(100):
    state = env.reset()
    done = False

    while not done:
        # Epsilon-greedy action
        if np.random.random() < epsilon:
            action = np.random.randint(0, 2)  # Explore
        else:
            action = np.argmax(Q[state, :])  # Exploit

        # Take action
        next_state, reward, done = env.step(action)

        # Q-learning update
        Q[state, action] = Q[state, action] + alpha * (
            reward + gamma * np.max(Q[next_state, :]) - Q[state, action]
        )

        state = next_state

print("Learned Q-values:")
print(Q)
print("\nPolicy (best action per state):")
for s in range(5):
    action = np.argmax(Q[s, :])
    print(f"State {s}: {'Right' if action == 1 else 'Left'}")

13.4.4 Deep RL Applications

Deep Q-Network (DQN) combines Q-learning with deep neural networks.

Atari Games Example:

  • Input: Raw pixels (210×160×3)
  • Output: Q-values for each action
  • Breakthrough: DQN beat human experts on Atari games (2015)

Modern RL Frameworks:

  • OpenAI Gym: Standard environments
  • Stable Baselines3: Easy-to-use RL algorithms
  • RLlib: Scalable RL library
💡 RL for Cybersecurity

Intrusion Response Automation:

State: Current network status, alerts
Actions: Block IP, isolate device, increase monitoring
Rewards: Detect malicious activity quickly, minimize false positives

Benefits:

  • Faster response than humans
  • Learn from past incidents
  • Adapt to new threats

13.5 Meta-Learning: Learning to Learn

13.5.1 What is Meta-Learning?

Meta-Learning = a learning algorithm that improves with more learning experiences.

Intuition:

  • Humans can learn new tasks quickly
  • Thanks to experience on previous tasks
  • Transfer learning at the level of the algorithm itself

Applications:

  • Few-shot learning (learn a class from 1-5 examples)
  • Multi-task learning (transfer across tasks)
  • Domain adaptation
  • Hyperparameter optimization

13.5.2 Model-Agnostic Meta-Learning (MAML)

MAML is an influential meta-learning algorithm.

Idea: Find an initialization \(\theta\) such that gradient descent converges quickly on new tasks.

Algorithm:

1. Initialize model parameters θ
2. For each meta-training iteration:
   a. Sample a batch of tasks
   b. For each task:
      i. Compute the inner gradient step: θ' = θ - α∇L_task(θ)
      ii. Compute the meta-loss at the updated params: L_meta(θ')
   c. Compute the meta-gradient: ∇L_meta(θ) (with respect to the original θ)
   d. Update: θ = θ - β∇L_meta(θ)

Visualization:

Task 1: ──> Inner step (1 gradient) ──> Evaluate on task 1
Task 2: ──> Inner step (1 gradient) ──> Evaluate on task 2
Task 3: ──> Inner step (1 gradient) ──> Evaluate on task 3
            ↓
       Meta-update (update the initialization θ)

13.5.3 Practical Implementation

import torch
import torch.nn as nn

class MAMLModel(nn.Module):
    """Simple model untuk meta-learning"""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(20, 64)
        self.fc2 = nn.Linear(64, 64)
        self.fc3 = nn.Linear(64, 5)  # 5-way classification

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.fc3(x)

def maml_inner_loop(model, task_batch, inner_lr, inner_steps):
    """
    Run a few inner gradient steps for one task and return the
    adapted parameters (the original model is left untouched).
    """
    # Start from the current meta-parameters
    params = {name: p for name, p in model.named_parameters()}
    support_images, support_labels = task_batch

    for _ in range(inner_steps):
        # Forward pass using the (possibly already adapted) parameters
        logits = torch.func.functional_call(model, params, (support_images,))
        loss = nn.functional.cross_entropy(logits, support_labels)

        # create_graph=True keeps the graph so the meta-gradient can flow back
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)

        # One gradient step on the adapted parameters
        params = {
            name: p - inner_lr * g
            for (name, p), g in zip(params.items(), grads)
        }

    return params

# Meta-training
model = MAMLModel()
meta_lr = 0.001
inner_lr = 0.01

# ... Training loop over multiple tasks ...
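The elided meta-training loop can be sketched end to end. This is a self-contained toy: `make_task` is a hypothetical sampler returning random 5-way data, so the loss only demonstrates how meta-gradients flow back to the initialization, not real learning (`torch.func.functional_call` requires PyTorch 2.0+):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.01

def make_task():
    """Hypothetical task sampler: random 5-way data (real tasks come from a dataset)."""
    x = torch.randn(10, 20)
    y = torch.randint(0, 5, (10,))
    return (x[:5], y[:5]), (x[5:], y[5:])   # (support set, query set)

for step in range(10):                       # meta-training iterations
    meta_opt.zero_grad()
    for _ in range(4):                       # batch of tasks
        (sx, sy), (qx, qy) = make_task()
        params = dict(model.named_parameters())
        # Inner step: one gradient update on the support set
        loss = nn.functional.cross_entropy(
            torch.func.functional_call(model, params, (sx,)), sy)
        grads = torch.autograd.grad(loss, list(params.values()),
                                    create_graph=True)
        adapted = {n: p - inner_lr * g
                   for (n, p), g in zip(params.items(), grads)}
        # Meta-loss: evaluate the adapted parameters on the query set
        meta_loss = nn.functional.cross_entropy(
            torch.func.functional_call(model, adapted, (qx,)), qy)
        meta_loss.backward()                 # flows back to the original θ
    meta_opt.step()
```

Because `create_graph=True` keeps the inner step differentiable, `meta_loss.backward()` computes the second-order meta-gradient with respect to the original initialization.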

13.6 How to Read & Understand ML Research Papers

13.6.1 Structure of an ML Paper

Typical structure:

  1. Abstract (4-6 sentences)
    • Problem statement
    • Proposed method
    • Key results
  2. Introduction
    • Context and motivation
    • Limitations of prior work
    • Contributions
  3. Related Work
    • Literature review
    • Positioning relative to existing methods
  4. Methods/Approach
    • Technical details
    • Algorithms, equations
    • Why the approach makes sense
  5. Experiments
    • Datasets
    • Baselines
    • Results
    • Ablation studies
  6. Results & Discussion
    • Analysis
    • Limitations
    • Future work
  7. References

13.6.2 Reading Strategy

First Pass (15 mins):

  1. Read title
  2. Skim abstract
  3. Look at figures and tables
  4. Read conclusion

Goal: Understand what problem is solved.

Second Pass (30 mins):

  1. Read introduction carefully
  2. Read method section
  3. Try to understand key ideas

Goal: Understand HOW problem is solved.

Third Pass (60 mins):

  1. Carefully read methods
  2. Check equations and proofs
  3. Study experimental setup
  4. Analyze results

Goal: Reproduce or extend the work.

13.6.3 Critical Questions to Ask

Understanding:

  • What is the core contribution?
  • What problem does it solve?
  • Why is it important?

Technical:

  • Are the assumptions reasonable?
  • Are the equations correct?
  • Is the proof sound?

Experimental:

  • Are baselines fairly compared?
  • Are results statistically significant?
  • What about edge cases?

Practical:

  • How would I implement this?
  • What are the computational costs?
  • When would I use this vs alternatives?

13.6.4 Taking Notes

Effective note template:

PAPER METADATA
─────────────
Title:
Authors:
Venue: (Conference/Journal)
Year:
PDF Link:

CORE CONTRIBUTION
─────────────────
Main idea:
Problem solved:
Key novelty:

TECHNICAL DETAILS
─────────────────
Main equation:
Key insight:
Assumptions:

EXPERIMENTS
──────────
Datasets used:
Baselines compared:
Main result:
Ablation study findings:

STRENGTHS
────────
1.
2.
3.

WEAKNESSES
──────────
1.
2.
3.

IMPLEMENTATION NOTES
───────────────────
Key code/algorithm:
Hyperparameters:
Tricks used:

FUTURE WORK
───────────
- How can this be improved?
- What are limitations?

13.7 Following the ML Research Community

13.7.1 Major Venues

Top-Tier Conferences:

| Conference | Field                    | When      | Deadline |
|------------|--------------------------|-----------|----------|
| NeurIPS    | General ML               | December  | Spring   |
| ICML       | General ML               | July      | January  |
| ICLR       | Learning Representations | May       | October  |
| CVPR       | Computer Vision          | June      | November |
| ICCV       | Computer Vision          | October   | March    |
| ECCV       | Computer Vision          | September | March    |
| ACL        | NLP                      | May       | October  |
| EMNLP      | NLP                      | October   | May      |
| AAAI       | General AI               | February  | August   |

Top Journals:

  • IEEE Transactions on Pattern Analysis & Machine Intelligence
  • Journal of Machine Learning Research
  • Nature Machine Intelligence

13.7.2 Staying Updated

Platforms:

  1. arXiv.org (Essential!)
    • Preprints in CS, especially ML
    • Free access to papers
    • Organized by categories (cs.LG, cs.CV, cs.CL)
    • Subscribe to daily digests
  2. Papers with Code
    • Links papers to implementations
    • Leaderboards for benchmarks
    • Code reproducibility
  3. Hugging Face Blog
    • Accessible explanations
    • Implementation tutorials
    • Latest models
  4. Twitter/X & Reddit
    • Researchers share findings
    • r/MachineLearning, r/LanguageModels
    • Be critical of social media claims
  5. Conferences
    • NeurIPS, ICML, CVPR websites
    • Attend talks, workshops
    • Network with researchers

13.7.3 Building Reading Habit

Recommended Schedule:

Per Week:
- 2-3 papers from the arXiv daily digest
- 1 deep dive into conference paper
- 1 implementation paper from Papers with Code

Per Month:
- Attend 1 seminar/webinar
- Read 1 blog post explaining recent trends
- Implement 1 technique from recent paper

Per Year:
- Follow 2-3 new research directions
- Implement full project using recent techniques
- Contribute to open-source ML projects

Tips:

  • Start with survey papers for an overview
  • Read papers by pioneering authors
  • Focus on papers related to your interests
  • Don't just read - try implementing!

13.9 Summary & Best Practices

13.9.1 Key Takeaways

Emerging Areas:

  • Multimodal Learning: Integrate vision, language, and audio
  • Few-Shot Learning: Learn from minimal examples
  • Federated Learning: Train without sharing raw data
  • RL: Learn by interacting with an environment
  • Meta-Learning: Learn how to learn

Research Skills:

  • Reading papers strategically (3-pass approach)
  • Following research communities
  • Building continuous learning habit
  • Understanding research frontiers

Practical Applications:

  • Vision-language models for diverse tasks
  • Privacy-preserving learning for sensitive data
  • RL agents for autonomous systems
  • Few-shot learning for rare scenarios
  • Meta-learning for quick adaptation

13.9.2 Cybersecurity Applications

Direct Applications:

| Challenge                        | ML Technique                       | Why                            |
|----------------------------------|------------------------------------|--------------------------------|
| Rare malware detection           | Few-shot learning                  | Limited labeled samples        |
| Privacy-preserving threat intel  | Federated learning                 | Organizations won't share data |
| Automated incident response      | Reinforcement learning             | Requires real-time decisions   |
| Zero-day vulnerability detection | Anomaly detection + meta-learning  | Never-seen-before threats      |
| Multi-modal threat detection     | Multimodal learning                | Network + system logs + alerts |

13.10 Exercises & Projects

πŸ“ Soal Latihan Konseptual
  1. Jelaskan bagaimana CLIP memungkinkan zero-shot classification. Mengapa ini powerful?

  2. Dalam federated learning, mengapa kita tidak langsung mengumpulkan data ke server?

  3. Apa perbedaan fundamental antara supervised learning dan reinforcement learning?

  4. Meta-learning disebut β€œlearning to learn”. Jelaskan konsep ini dengan contoh.

  5. Bagaimana few-shot learning berbeda dari transfer learning?

  6. Jelaskan 3 strategi untuk membaca paper secara efisien.

  7. Apa tantangan utama dalam multimodal learning?

  8. Mengapa federated learning penting untuk cybersecurity?

  9. Bagaimana Q-learning berbeda dari supervised learning?

  10. Sebutkan 3 open problems dalam ML research dan mengapa penting.

🔬 Project: Continuous Learning Journey

Objective: Establish a personal ML research practice

Tasks:

Phase 1: Build Infrastructure (Weeks 1-2)
- [ ] Set up an arXiv.org account and subscribe to the daily digest
- [ ] Create a folder/system for saving papers
- [ ] Install the Papers with Code bookmarklet
- [ ] Join 2 ML communities (Reddit, Discord, etc.)

Phase 2: Read & Implement (Weeks 3-4)
- [ ] Select 1 research area of interest
- [ ] Read 5 foundational papers (use the 3-pass method)
- [ ] Take structured notes for each paper
- [ ] Implement the key technique from 1 paper

Phase 3: Contribute (Week 5+)
- [ ] Star/fork 1 related GitHub project
- [ ] Read 5 more recent papers
- [ ] Consider: could I improve this?
- [ ] Write a blog post explaining a paper (teach others!)
- [ ] Submit an issue/PR to a project

Deliverables:

  1. Paper notes folder (5+ papers with structured notes)
  2. Implementation code (with comments explaining the technique)
  3. Blog post or README (explaining it to others)
  4. GitHub activity (fork, star, or contribute)

Rubric:

  • Paper selection relevance (5 pts)
  • Quality of notes/understanding (5 pts)
  • Implementation correctness (5 pts)
  • Clear documentation (5 pts)
  • Community engagement (5 pts)

Total: 25 points


13.11 References & Further Reading

Papers & Research:

  • Radford et al., 2021. "Learning Transferable Visual Models From Natural Language Supervision" (ICML) - CLIP paper
  • Finn et al., 2017. "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" (ICML) - MAML
  • McMahan et al., 2017. "Communication-Efficient Learning of Deep Networks from Decentralized Data" (AISTATS) - Federated learning (FedAvg)
  • Devlin et al., 2019. "BERT: Pre-training of Deep Bidirectional Transformers" (NAACL)
  • Mnih et al., 2015. "Human-level control through deep reinforcement learning" (Nature) - DQN

Conferences to Follow:

  • NeurIPS (December)
  • ICML (July)
  • ICLR (May)
  • CVPR (June)
  • ACL (May)
  • EMNLP (October)

Relationship to Program Learning Outcomes

| CPMK   | Sub-CPMK                                     | Covered |
|--------|----------------------------------------------|---------|
| CPMK-1 | Understand emerging ML areas                 | ✓       |
| CPMK-1 | Understand continuous learning strategies    | ✓       |
| CPMK-4 | Create research practice & skill development | ✓       |

Related Labs: Lab 13 (Research Paper Analysis & Implementation Project)
Related Chapters: Chapters 8-12 (foundation for understanding these advanced topics)
Estimated Reading Time: 90 minutes
Estimated Practice Time: 10+ hours (ongoing research journey)


Last Updated: December 2024
Version: 1.0