Lab 6: Image Classification with CNNs and Transfer Learning

CIFAR-10 Image Classification Using Convolutional Neural Networks

Author

Machine Learning - Data Science for Cybersecurity

Published

December 15, 2025

17 Introduction

17.1 Learning Objectives

After completing this lab, you should be able to:

  1. Understand CNN architectures for image classification
  2. Build CNNs from scratch at varying levels of complexity
  3. Apply data augmentation to improve generalization
  4. Implement transfer learning with pre-trained models
  5. Fine-tune pre-trained models
  6. Compare different CNN and transfer learning approaches
  7. Visualize CNN feature maps and activations
  8. Optimize model performance for high accuracy

17.2 Lab Overview

In this lab, you will work with the CIFAR-10 dataset, a color image classification dataset of 60,000 images, each 32×32 pixels, spanning 10 distinct classes.

17.2.1 Dataset CIFAR-10

CIFAR-10 (named after the Canadian Institute For Advanced Research) is one of the most widely used benchmark datasets in computer vision:

  • Total size: 60,000 color (RGB) images
  • Resolution: 32 × 32 pixels
  • Number of classes: 10 categories
  • Training set: 50,000 images
  • Test set: 10,000 images
  • Distribution: Balanced (6,000 images per class)
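The balance claimed above is easy to verify with NumPy once the labels are loaded; a minimal sketch (using a synthetic label array with the same shape and balance as CIFAR-10, so it runs without downloading anything):

```python
import numpy as np

# Synthetic stand-in for the full CIFAR-10 label array: shape (60000, 1),
# ten classes, 6,000 examples each (assumption: the real labels come from
# keras.datasets.cifar10.load_data())
y_all = np.repeat(np.arange(10), 6000).reshape(-1, 1)

# np.unique flattens the array and returns sorted class ids with counts
classes, counts = np.unique(y_all, return_counts=True)
print(counts)  # each entry is 6000 -> perfectly balanced
```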

The 10 classes in CIFAR-10:

  1. ✈️ Airplane (pesawat)
  2. 🚗 Automobile (mobil)
  3. 🐦 Bird (burung)
  4. 🐱 Cat (kucing)
  5. 🦌 Deer (rusa)
  6. 🐕 Dog (anjing)
  7. 🐸 Frog (katak)
  8. 🐴 Horse (kuda)
  9. 🚢 Ship (kapal)
  10. 🚚 Truck (truk)

17.2.2 Approaches Covered

In this lab, we will explore several approaches:

graph TD
    A[CIFAR-10 Dataset] --> B[Part 1: Data Exploration]
    B --> C[Part 2: CNN from Scratch]
    B --> D[Part 3: Transfer Learning]
    C --> C1[Simple CNN]
    C --> C2[VGG-style CNN]
    C --> C3[Data Augmentation]
    D --> D1[Feature Extraction]
    D --> D2[Fine-tuning]
    D1 --> E[Part 4: Advanced Techniques]
    D2 --> E
    E --> E1[ResNet50]
    E --> E2[Model Ensemble]
    E --> E3[Grad-CAM]
    E1 --> F[Final Comparison]
    E2 --> F
    E3 --> F

17.3 Environment Setup

17.3.1 Import Libraries

# Core libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from pathlib import Path
import warnings
warnings.filterwarnings('ignore')

# Import TensorFlow/Keras
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16, ResNet50
from tensorflow.keras.callbacks import (
    ModelCheckpoint, EarlyStopping,
    ReduceLROnPlateau, TensorBoard
)

# Import scikit-learn
from sklearn.model_selection import train_test_split
from sklearn.metrics import (
    classification_report, confusion_matrix,
    accuracy_score, precision_recall_fscore_support
)

# Import untuk visualisasi
import cv2
from PIL import Image

# Set random seeds for reproducibility
np.random.seed(42)
tf.random.set_seed(42)

print(f"TensorFlow version: {tf.__version__}")
print(f"Keras version: {keras.__version__}")
print(f"GPU available: {tf.config.list_physical_devices('GPU')}")

17.3.2 GPU Configuration (Optional)

# Detect and configure the GPU if one is available
def setup_gpu():
    """Configure the GPU for more efficient training"""
    gpus = tf.config.list_physical_devices('GPU')

    if gpus:
        try:
            # Enable memory growth to avoid allocating all GPU memory up front
            for gpu in gpus:
                tf.config.experimental.set_memory_growth(gpu, True)

            print(f"✓ {len(gpus)} GPU(s) found and configured")
            print(f"  GPU devices: {[gpu.name for gpu in gpus]}")

            # Enable mixed precision for faster training on supported GPUs
            # (with mixed precision, Keras recommends keeping the final
            # softmax layer in float32 for numerical stability)
            policy = tf.keras.mixed_precision.Policy('mixed_float16')
            tf.keras.mixed_precision.set_global_policy(policy)
            print(f"✓ Mixed precision enabled: {policy.name}")

        except RuntimeError as e:
            print(f"✗ GPU configuration error: {e}")
    else:
        print("⚠ No GPU found. Training will use CPU (slower)")

setup_gpu()

17.3.3 Directory Setup

# Create directories for saving models and results
dirs = {
    'models': Path('models'),
    'checkpoints': Path('checkpoints'),
    'figures': Path('figures'),
    'logs': Path('logs'),
    'predictions': Path('predictions')
}

for name, path in dirs.items():
    path.mkdir(exist_ok=True, parents=True)
    print(f"✓ Directory created: {path}")

17.3.4 Global Constants

# CIFAR-10 constants
IMG_HEIGHT = 32
IMG_WIDTH = 32
IMG_CHANNELS = 3
NUM_CLASSES = 10
BATCH_SIZE = 128
EPOCHS = 50

# CIFAR-10 class names
CLASS_NAMES = [
    'airplane', 'automobile', 'bird', 'cat', 'deer',
    'dog', 'frog', 'horse', 'ship', 'truck'
]

# Class names in Indonesian (used as secondary labels in figures)
CLASS_NAMES_ID = [
    'Pesawat', 'Mobil', 'Burung', 'Kucing', 'Rusa',
    'Anjing', 'Katak', 'Kuda', 'Kapal', 'Truk'
]

print("Dataset configuration:")
print(f"  Image size: {IMG_HEIGHT}×{IMG_WIDTH}×{IMG_CHANNELS}")
print(f"  Number of classes: {NUM_CLASSES}")
print(f"  Batch size: {BATCH_SIZE}")
print(f"  Training epochs: {EPOCHS}")

18 Part 1: Data Loading and Exploration

18.1 Load Dataset CIFAR-10

CIFAR-10 ships with the Keras datasets module, so it is easy to load.

# Load CIFAR-10 dataset
def load_cifar10():
    """
    Load the CIFAR-10 dataset from Keras

    Returns:
        tuple: (X_train, y_train), (X_test, y_test)
    """
    print("Loading CIFAR-10 dataset...")
    (X_train, y_train), (X_test, y_test) = keras.datasets.cifar10.load_data()

    print(f"\n✓ Dataset loaded successfully!")
    print(f"  Training set: {X_train.shape[0]:,} images")
    print(f"  Test set: {X_test.shape[0]:,} images")
    print(f"  Image shape: {X_train.shape[1:]}")
    print(f"  Label shape: {y_train.shape}")

    return (X_train, y_train), (X_test, y_test)

# Load data
(X_train_raw, y_train_raw), (X_test_raw, y_test_raw) = load_cifar10()

18.2 Exploratory Data Analysis

18.2.1 Dataset Information

def display_dataset_info(X_train, y_train, X_test, y_test):
    """Display comprehensive dataset information"""

    print("=" * 70)
    print("CIFAR-10 DATASET INFORMATION")
    print("=" * 70)

    # Basic info
    print(f"\n1. DATA DIMENSIONS:")
    print(f"   Training images: {X_train.shape}")
    print(f"   Training labels: {y_train.shape}")
    print(f"   Test images: {X_test.shape}")
    print(f"   Test labels: {y_test.shape}")

    # Memory usage
    train_size_mb = X_train.nbytes / (1024**2)
    test_size_mb = X_test.nbytes / (1024**2)
    print(f"\n2. MEMORY USAGE:")
    print(f"   Training data: {train_size_mb:.2f} MB")
    print(f"   Test data: {test_size_mb:.2f} MB")
    print(f"   Total: {train_size_mb + test_size_mb:.2f} MB")

    # Data type and range
    print(f"\n3. DATA TYPE:")
    print(f"   Image dtype: {X_train.dtype}")
    print(f"   Label dtype: {y_train.dtype}")
    print(f"   Pixel value range: [{X_train.min()}, {X_train.max()}]")

    # Class distribution
    print(f"\n4. CLASS DISTRIBUTION:")
    unique, counts = np.unique(y_train, return_counts=True)
    for class_id, count in zip(unique, counts):
        # np.unique flattens the (n, 1) label array, so class_id is a scalar
        class_name = CLASS_NAMES[class_id]
        percentage = (count / len(y_train)) * 100
        print(f"   {class_id}: {class_name:12s} - {count:5,} images ({percentage:.2f}%)")

    print("=" * 70)

display_dataset_info(X_train_raw, y_train_raw, X_test_raw, y_test_raw)

18.2.2 Class Distribution Visualization

def plot_class_distribution(y_train, y_test, save_path=None):
    """Plot class distributions for the training and test sets"""

    fig, axes = plt.subplots(1, 2, figsize=(15, 5))

    # Training set distribution
    unique_train, counts_train = np.unique(y_train, return_counts=True)
    axes[0].bar(unique_train.flatten(), counts_train, color='steelblue', alpha=0.8)
    axes[0].set_xlabel('Class ID', fontsize=12, fontweight='bold')
    axes[0].set_ylabel('Number of Images', fontsize=12, fontweight='bold')
    axes[0].set_title('Training Set - Class Distribution', fontsize=14, fontweight='bold')
    axes[0].set_xticks(range(NUM_CLASSES))
    axes[0].set_xticklabels(CLASS_NAMES, rotation=45, ha='right')
    axes[0].grid(axis='y', alpha=0.3)

    # Add count labels
    for i, count in enumerate(counts_train):
        axes[0].text(i, count + 100, str(count), ha='center', va='bottom', fontweight='bold')

    # Test set distribution
    unique_test, counts_test = np.unique(y_test, return_counts=True)
    axes[1].bar(unique_test.flatten(), counts_test, color='coral', alpha=0.8)
    axes[1].set_xlabel('Class ID', fontsize=12, fontweight='bold')
    axes[1].set_ylabel('Number of Images', fontsize=12, fontweight='bold')
    axes[1].set_title('Test Set - Class Distribution', fontsize=14, fontweight='bold')
    axes[1].set_xticks(range(NUM_CLASSES))
    axes[1].set_xticklabels(CLASS_NAMES, rotation=45, ha='right')
    axes[1].grid(axis='y', alpha=0.3)

    # Add count labels
    for i, count in enumerate(counts_test):
        axes[1].text(i, count + 20, str(count), ha='center', va='bottom', fontweight='bold')

    plt.tight_layout()

    if save_path:
        plt.savefig(save_path, dpi=300, bbox_inches='tight')
        print(f"✓ Figure saved to: {save_path}")

    plt.show()

plot_class_distribution(y_train_raw, y_test_raw,
                       save_path=dirs['figures'] / 'class_distribution.png')

18.2.3 Sample Image Visualization

def plot_sample_images(X, y, num_samples=20, save_path=None):
    """
    Plot sample images from each class

    Parameters:
        X: image array
        y: label array
        num_samples: total number of samples, split evenly across classes
        save_path: path for saving the figure
    """
    samples_per_class = num_samples // NUM_CLASSES

    fig, axes = plt.subplots(NUM_CLASSES, samples_per_class,
                            figsize=(15, 18))

    fig.suptitle('CIFAR-10 Sample Images per Class',
                 fontsize=16, fontweight='bold', y=0.995)

    for class_id in range(NUM_CLASSES):
        # Indices belonging to this class
        class_indices = np.where(y.flatten() == class_id)[0]
        # Random sample
        sample_indices = np.random.choice(class_indices,
                                         samples_per_class,
                                         replace=False)

        for i, idx in enumerate(sample_indices):
            ax = axes[class_id, i]
            ax.imshow(X[idx])
            ax.axis('off')

            if i == 0:
                # Label the row next to the first column; ax.text is used
                # because axis('off') would hide a ylabel
                ax.text(-0.35, 0.5,
                        f'{CLASS_NAMES[class_id]}\n({CLASS_NAMES_ID[class_id]})',
                        transform=ax.transAxes, fontsize=10, fontweight='bold',
                        ha='right', va='center')

    plt.tight_layout()

    if save_path:
        plt.savefig(save_path, dpi=300, bbox_inches='tight')
        print(f"✓ Figure saved to: {save_path}")

    plt.show()

plot_sample_images(X_train_raw, y_train_raw, num_samples=20,
                  save_path=dirs['figures'] / 'sample_images.png')

18.2.4 Pixel Value Analysis

def analyze_pixel_values(X_train, X_test):
    """Analyze the distribution of pixel values per channel"""

    print("=" * 70)
    print("PIXEL VALUE ANALYSIS")
    print("=" * 70)

    # Statistics per channel
    channels = ['Red', 'Green', 'Blue']

    for i, channel in enumerate(channels):
        train_mean = X_train[:, :, :, i].mean()
        train_std = X_train[:, :, :, i].std()
        test_mean = X_test[:, :, :, i].mean()
        test_std = X_test[:, :, :, i].std()

        print(f"\n{channel} Channel:")
        print(f"  Train - Mean: {train_mean:.2f}, Std: {train_std:.2f}")
        print(f"  Test  - Mean: {test_mean:.2f}, Std: {test_std:.2f}")

    # Overall statistics
    print(f"\nOverall Statistics:")
    print(f"  Train - Mean: {X_train.mean():.2f}, Std: {X_train.std():.2f}")
    print(f"  Test  - Mean: {X_test.mean():.2f}, Std: {X_test.std():.2f}")

    print("=" * 70)

analyze_pixel_values(X_train_raw, X_test_raw)

18.2.5 Pixel Distribution Visualization

def plot_pixel_distribution(X_train, save_path=None):
    """Plot the pixel value distribution for each RGB channel"""

    fig, axes = plt.subplots(1, 3, figsize=(15, 4))
    channels = ['Red', 'Green', 'Blue']
    colors = ['red', 'green', 'blue']

    for i, (channel, color) in enumerate(zip(channels, colors)):
        # Subsample for efficiency
        sample_size = min(10000, len(X_train))
        sample_indices = np.random.choice(len(X_train), sample_size, replace=False)
        pixel_values = X_train[sample_indices, :, :, i].flatten()

        axes[i].hist(pixel_values, bins=50, color=color, alpha=0.7, edgecolor='black')
        axes[i].set_xlabel('Pixel Value', fontsize=11, fontweight='bold')
        axes[i].set_ylabel('Frequency', fontsize=11, fontweight='bold')
        axes[i].set_title(f'{channel} Channel Distribution',
                         fontsize=12, fontweight='bold')
        axes[i].grid(axis='y', alpha=0.3)

        # Add statistics
        mean_val = X_train[:, :, :, i].mean()
        std_val = X_train[:, :, :, i].std()
        axes[i].axvline(mean_val, color='black', linestyle='--', linewidth=2,
                       label=f'Mean: {mean_val:.1f}')
        axes[i].legend()

    plt.tight_layout()

    if save_path:
        plt.savefig(save_path, dpi=300, bbox_inches='tight')
        print(f"✓ Figure saved to: {save_path}")

    plt.show()

plot_pixel_distribution(X_train_raw,
                       save_path=dirs['figures'] / 'pixel_distribution.png')

18.3 Data Preprocessing

18.3.1 Data Normalization

def normalize_data(X_train, X_test, method='standard'):
    """
    Normalize pixel values

    Parameters:
        X_train: training images
        X_test: test images
        method: 'standard' (scale to [0, 1]) or 'zscore' (mean=0, std=1)

    Returns:
        X_train_norm, X_test_norm
    """
    print(f"Normalizing data using '{method}' method...")

    if method == 'standard':
        # Scale to the [0, 1] range
        X_train_norm = X_train.astype('float32') / 255.0
        X_test_norm = X_test.astype('float32') / 255.0
        print(f"  Pixel range: [0, 1]")

    elif method == 'zscore':
        # Z-score normalization
        X_train_norm = X_train.astype('float32')
        X_test_norm = X_test.astype('float32')

        # Compute mean and std from the training set only (avoids test-set leakage)
        mean = X_train_norm.mean(axis=(0, 1, 2), keepdims=True)
        std = X_train_norm.std(axis=(0, 1, 2), keepdims=True)

        # Normalize (the epsilon guards against division by zero)
        X_train_norm = (X_train_norm - mean) / (std + 1e-7)
        X_test_norm = (X_test_norm - mean) / (std + 1e-7)
        print(f"  Mean: {mean.flatten()}")
        print(f"  Std: {std.flatten()}")

    else:
        raise ValueError(f"Unknown normalization method: {method}")

    print(f"✓ Normalization complete!")
    print(f"  Train range: [{X_train_norm.min():.3f}, {X_train_norm.max():.3f}]")
    print(f"  Test range: [{X_test_norm.min():.3f}, {X_test_norm.max():.3f}]")

    return X_train_norm, X_test_norm

# Normalize the data
X_train_norm, X_test_norm = normalize_data(X_train_raw, X_test_raw, method='standard')

18.3.2 One-Hot Encoding Labels

def encode_labels(y_train, y_test, num_classes=10):
    """
    One-hot encode labels

    Parameters:
        y_train: training labels
        y_test: test labels
        num_classes: number of classes

    Returns:
        y_train_encoded, y_test_encoded
    """
    print("Encoding labels...")

    # One-hot encoding
    y_train_encoded = keras.utils.to_categorical(y_train, num_classes)
    y_test_encoded = keras.utils.to_categorical(y_test, num_classes)

    print(f"✓ Labels encoded!")
    print(f"  Original shape: {y_train.shape} -> Encoded shape: {y_train_encoded.shape}")
    print(f"  Example: {y_train[0]} -> {y_train_encoded[0]}")

    return y_train_encoded, y_test_encoded

# Encode labels
y_train_encoded, y_test_encoded = encode_labels(y_train_raw, y_test_raw)
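Under the hood, one-hot encoding is just row-indexing into an identity matrix; a NumPy-only sketch equivalent to `keras.utils.to_categorical` (the sample labels here are made up for illustration):

```python
import numpy as np

labels = np.array([3, 0, 9])        # hypothetical class ids
one_hot = np.eye(10)[labels]        # same result as to_categorical(labels, 10)

print(one_hot.shape)                # (3, 10)
print(one_hot[0])                   # 1.0 at index 3, zeros elsewhere
```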

18.3.3 Train-Validation Split

def create_validation_split(X_train, y_train, validation_size=0.2, random_state=42):
    """
    Split the training data into train and validation sets

    Parameters:
        X_train: training images
        y_train: training labels (one-hot encoded)
        validation_size: fraction of the data held out for validation
        random_state: random seed

    Returns:
        X_train, X_val, y_train, y_val
    """
    print(f"Creating validation split ({validation_size*100:.0f}%)...")

    X_train_split, X_val, y_train_split, y_val = train_test_split(
        X_train, y_train,
        test_size=validation_size,
        random_state=random_state,
        stratify=np.argmax(y_train, axis=1)  # Stratify to preserve class balance
    )

    print(f"✓ Split complete!")
    print(f"  Training set: {X_train_split.shape[0]:,} images")
    print(f"  Validation set: {X_val.shape[0]:,} images")

    return X_train_split, X_val, y_train_split, y_val

# Create the validation split
X_train, X_val, y_train, y_val = create_validation_split(
    X_train_norm, y_train_encoded, validation_size=0.2
)
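The effect of `stratify` is easiest to see on a small imbalanced example (toy data, unrelated to CIFAR-10): each split keeps the original class proportions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(-1, 1)       # 100 dummy samples
y = np.array([0] * 80 + [1] * 20)       # imbalanced 80/20 labels

_, _, _, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
print(np.bincount(y_val))               # 20 zeros, 5 ones -> 80/20 preserved
```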

18.3.4 Data Preparation Summary

def print_data_summary():
    """Print a summary of all processed datasets"""

    print("\n" + "=" * 70)
    print("DATA PREPARATION SUMMARY")
    print("=" * 70)

    datasets = {
        'Training': (X_train, y_train),
        'Validation': (X_val, y_val),
        'Test': (X_test_norm, y_test_encoded)
    }

    for name, (X, y) in datasets.items():
        print(f"\n{name} Set:")
        print(f"  Images: {X.shape}")
        print(f"  Labels: {y.shape}")
        print(f"  Image range: [{X.min():.3f}, {X.max():.3f}]")
        print(f"  Memory: {X.nbytes / (1024**2):.2f} MB")

    print("\n" + "=" * 70)

print_data_summary()

19 Part 2: CNN from Scratch

19.1 Simple CNN Architecture

We start with a simple CNN architecture as a baseline.

19.1.1 Build Simple CNN Model

def build_simple_cnn(input_shape=(32, 32, 3), num_classes=10):
    """
    Build a simple CNN with 3 convolutional blocks

    Architecture:
        Conv(32) -> MaxPool -> Conv(64) -> MaxPool -> Conv(128) -> MaxPool ->
        Flatten -> Dense(128) -> Dropout -> Dense(num_classes)

    Parameters:
        input_shape: input image shape
        num_classes: number of output classes

    Returns:
        model: Keras model
    """
    model = models.Sequential(name='SimpleCNN')

    # Block 1: Conv -> ReLU -> MaxPool
    model.add(layers.Conv2D(32, (3, 3), activation='relu',
                           padding='same', input_shape=input_shape,
                           name='conv1'))
    model.add(layers.BatchNormalization(name='bn1'))
    model.add(layers.MaxPooling2D((2, 2), name='pool1'))
    model.add(layers.Dropout(0.25, name='dropout1'))

    # Block 2: Conv -> ReLU -> MaxPool
    model.add(layers.Conv2D(64, (3, 3), activation='relu',
                           padding='same', name='conv2'))
    model.add(layers.BatchNormalization(name='bn2'))
    model.add(layers.MaxPooling2D((2, 2), name='pool2'))
    model.add(layers.Dropout(0.25, name='dropout2'))

    # Block 3: Conv -> ReLU -> MaxPool
    model.add(layers.Conv2D(128, (3, 3), activation='relu',
                           padding='same', name='conv3'))
    model.add(layers.BatchNormalization(name='bn3'))
    model.add(layers.MaxPooling2D((2, 2), name='pool3'))
    model.add(layers.Dropout(0.25, name='dropout3'))

    # Fully connected layers
    model.add(layers.Flatten(name='flatten'))
    model.add(layers.Dense(128, activation='relu', name='fc1'))
    model.add(layers.BatchNormalization(name='bn_fc'))
    model.add(layers.Dropout(0.5, name='dropout_fc'))
    model.add(layers.Dense(num_classes, activation='softmax', name='output'))

    return model

# Build model
simple_cnn = build_simple_cnn()
simple_cnn.summary()
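As a sanity check on the summary above: `padding='same'` convolutions preserve the 32×32 spatial size, while each 2×2 max-pool halves it (32 → 16 → 8 → 4), so `Flatten` receives 4 × 4 × 128 = 2048 features. The arithmetic in a few lines:

```python
def spatial_size_after_pools(size, num_pools):
    """Spatial dimension after repeated 2x2 max pooling;
    'same'-padded 3x3 convs leave the size unchanged."""
    for _ in range(num_pools):
        size //= 2
    return size

side = spatial_size_after_pools(32, 3)   # 32 -> 16 -> 8 -> 4
print(side, side * side * 128)           # 4 2048 (flatten size before fc1)
```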

19.1.2 Architecture Visualization

# Plot model architecture
def plot_model_architecture(model, save_path=None):
    """Plot the model architecture"""

    keras.utils.plot_model(
        model,
        to_file=save_path if save_path else 'model_architecture.png',
        show_shapes=True,
        show_layer_names=True,
        rankdir='TB',  # Top to Bottom
        expand_nested=True,
        dpi=150
    )

    if save_path:
        print(f"✓ Model architecture saved to: {save_path}")

plot_model_architecture(simple_cnn,
                       save_path=dirs['figures'] / 'simple_cnn_architecture.png')

19.1.3 Compile Model

def compile_model(model, learning_rate=0.001):
    """
    Compile the model with optimizer, loss, and metrics

    Parameters:
        model: Keras model
        learning_rate: learning rate for the optimizer
    """
    optimizer = optimizers.Adam(learning_rate=learning_rate)

    model.compile(
        optimizer=optimizer,
        loss='categorical_crossentropy',
        metrics=['accuracy',
                keras.metrics.TopKCategoricalAccuracy(k=3, name='top3_acc')]
    )

    print(f"✓ Model compiled with:")
    print(f"  Optimizer: Adam (lr={learning_rate})")
    print(f"  Loss: categorical_crossentropy")
    print(f"  Metrics: accuracy, top-3 accuracy")

compile_model(simple_cnn, learning_rate=0.001)
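To make the loss concrete: with one-hot labels, categorical cross-entropy reduces to the negative log-probability the softmax assigns to the true class, loss = −Σ y_true · log(y_pred). A hand computation with made-up probabilities:

```python
import numpy as np

y_true = np.array([0, 0, 1, 0])           # one-hot: true class is index 2
y_pred = np.array([0.1, 0.2, 0.6, 0.1])   # hypothetical softmax output

loss = -np.sum(y_true * np.log(y_pred))   # = -log(0.6)
print(round(loss, 4))                     # 0.5108
```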

19.1.4 Setup Callbacks

def create_callbacks(model_name, monitor='val_accuracy', patience=10):
    """
    Create training callbacks

    Parameters:
        model_name: model name used for checkpoint and log files
        monitor: metric to monitor
        patience: early-stopping patience (epochs without improvement)

    Returns:
        list of callbacks
    """
    callbacks = [
        # ModelCheckpoint: save the best model
        ModelCheckpoint(
            filepath=dirs['checkpoints'] / f'{model_name}_best.h5',
            monitor=monitor,
            mode='max',
            save_best_only=True,
            verbose=1
        ),

        # EarlyStopping: stop training when no further improvement is seen
        EarlyStopping(
            monitor=monitor,
            mode='max',
            patience=patience,
            restore_best_weights=True,
            verbose=1
        ),

        # ReduceLROnPlateau: halve the learning rate when the metric plateaus
        ReduceLROnPlateau(
            monitor=monitor,
            mode='max',
            factor=0.5,
            patience=5,
            min_lr=1e-7,
            verbose=1
        ),

        # TensorBoard: log metrics for visualization
        TensorBoard(
            log_dir=dirs['logs'] / model_name,
            histogram_freq=1,
            write_graph=True,
            write_images=True
        )
    ]

    print(f"✓ Created {len(callbacks)} callbacks for training")
    return callbacks

callbacks_simple = create_callbacks('simple_cnn', patience=15)

19.1.5 Train Simple CNN

def train_model(model, X_train, y_train, X_val, y_val,
               callbacks, epochs=50, batch_size=128):
    """
    Train model

    Parameters:
        model: Keras model
        X_train, y_train: training data
        X_val, y_val: validation data
        callbacks: list of callbacks
        epochs: number of epochs
        batch_size: batch size

    Returns:
        history: training history
    """
    print(f"\nTraining {model.name}...")
    print(f"  Epochs: {epochs}")
    print(f"  Batch size: {batch_size}")
    print(f"  Training samples: {len(X_train):,}")
    print(f"  Validation samples: {len(X_val):,}")
    print("=" * 70)

    history = model.fit(
        X_train, y_train,
        batch_size=batch_size,
        epochs=epochs,
        validation_data=(X_val, y_val),
        callbacks=callbacks,
        verbose=1
    )

    print("\n✓ Training complete!")
    return history

# Train simple CNN
history_simple = train_model(
    simple_cnn, X_train, y_train, X_val, y_val,
    callbacks_simple, epochs=EPOCHS, batch_size=BATCH_SIZE
)

19.1.6 Plot Training History

def plot_training_history(history, model_name, save_path=None):
    """Plot training and validation metrics"""

    fig, axes = plt.subplots(1, 2, figsize=(15, 5))

    # Plot accuracy
    axes[0].plot(history.history['accuracy'], label='Train Accuracy', linewidth=2)
    axes[0].plot(history.history['val_accuracy'], label='Val Accuracy', linewidth=2)
    axes[0].set_xlabel('Epoch', fontsize=12, fontweight='bold')
    axes[0].set_ylabel('Accuracy', fontsize=12, fontweight='bold')
    axes[0].set_title(f'{model_name} - Accuracy', fontsize=14, fontweight='bold')
    axes[0].legend(fontsize=10)
    axes[0].grid(alpha=0.3)

    # Plot loss
    axes[1].plot(history.history['loss'], label='Train Loss', linewidth=2)
    axes[1].plot(history.history['val_loss'], label='Val Loss', linewidth=2)
    axes[1].set_xlabel('Epoch', fontsize=12, fontweight='bold')
    axes[1].set_ylabel('Loss', fontsize=12, fontweight='bold')
    axes[1].set_title(f'{model_name} - Loss', fontsize=14, fontweight='bold')
    axes[1].legend(fontsize=10)
    axes[1].grid(alpha=0.3)

    plt.tight_layout()

    if save_path:
        plt.savefig(save_path, dpi=300, bbox_inches='tight')
        print(f"✓ Figure saved to: {save_path}")

    plt.show()

    # Print best metrics
    best_epoch = np.argmax(history.history['val_accuracy'])
    print(f"\nBest Performance at Epoch {best_epoch + 1}:")
    print(f"  Train Accuracy: {history.history['accuracy'][best_epoch]:.4f}")
    print(f"  Val Accuracy: {history.history['val_accuracy'][best_epoch]:.4f}")
    print(f"  Train Loss: {history.history['loss'][best_epoch]:.4f}")
    print(f"  Val Loss: {history.history['val_loss'][best_epoch]:.4f}")

plot_training_history(history_simple, 'Simple CNN',
                     save_path=dirs['figures'] / 'simple_cnn_history.png')

19.1.7 Evaluate Simple CNN

def evaluate_model(model, X_test, y_test, model_name='Model'):
    """
    Evaluate the model on the test set

    Parameters:
        model: trained model
        X_test: test images
        y_test: test labels (one-hot)
        model_name: model name for display

    Returns:
        test_loss, test_accuracy
    """
    print(f"\nEvaluating {model_name} on test set...")
    print("=" * 70)

    # Evaluate
    results = model.evaluate(X_test, y_test, verbose=1)

    print(f"\n{model_name} Test Results:")
    print(f"  Test Loss: {results[0]:.4f}")
    print(f"  Test Accuracy: {results[1]:.4f}")
    print(f"  Top-3 Accuracy: {results[2]:.4f}")

    return results

# Evaluate simple CNN
results_simple = evaluate_model(simple_cnn, X_test_norm, y_test_encoded,
                               'Simple CNN')
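A single accuracy number hides per-class behavior (cats and dogs, for instance, are far more confusable than ships and trucks). The `confusion_matrix` and `classification_report` utilities imported earlier give a per-class view; a self-contained sketch with synthetic 3-class predictions standing in for `np.argmax(model.predict(X_test), axis=1)`:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

rng = np.random.default_rng(42)
y_true = rng.integers(0, 3, size=300)               # synthetic ground truth
# a "classifier" that keeps the true label ~80% of the time, random otherwise
y_pred = np.where(rng.random(300) < 0.8,
                  y_true, rng.integers(0, 3, size=300))

cm = confusion_matrix(y_true, y_pred)
print(cm)                                           # rows: true, cols: predicted
print(classification_report(y_true, y_pred, digits=3))
```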

19.2 VGG-Style CNN

Next, we build a deeper CNN with a VGG-like architecture.

19.2.1 Build VGG-Style Model

def build_vgg_style_cnn(input_shape=(32, 32, 3), num_classes=10):
    """
    Build a VGG-style CNN with multiple conv layers per block

    Architecture:
        2x Conv(64) -> MaxPool ->
        2x Conv(128) -> MaxPool ->
        3x Conv(256) -> MaxPool ->
        FC(512) -> FC(256) -> Output

    Parameters:
        input_shape: input image shape
        num_classes: number of output classes

    Returns:
        model: Keras model
    """
    model = models.Sequential(name='VGG_Style_CNN')

    # Block 1: 2x Conv(64) -> MaxPool
    model.add(layers.Conv2D(64, (3, 3), activation='relu',
                           padding='same', input_shape=input_shape,
                           name='conv1_1'))
    model.add(layers.BatchNormalization(name='bn1_1'))
    model.add(layers.Conv2D(64, (3, 3), activation='relu',
                           padding='same', name='conv1_2'))
    model.add(layers.BatchNormalization(name='bn1_2'))
    model.add(layers.MaxPooling2D((2, 2), name='pool1'))
    model.add(layers.Dropout(0.25, name='dropout1'))

    # Block 2: 2x Conv(128) -> MaxPool
    model.add(layers.Conv2D(128, (3, 3), activation='relu',
                           padding='same', name='conv2_1'))
    model.add(layers.BatchNormalization(name='bn2_1'))
    model.add(layers.Conv2D(128, (3, 3), activation='relu',
                           padding='same', name='conv2_2'))
    model.add(layers.BatchNormalization(name='bn2_2'))
    model.add(layers.MaxPooling2D((2, 2), name='pool2'))
    model.add(layers.Dropout(0.25, name='dropout2'))

    # Block 3: 3x Conv(256) -> MaxPool
    model.add(layers.Conv2D(256, (3, 3), activation='relu',
                           padding='same', name='conv3_1'))
    model.add(layers.BatchNormalization(name='bn3_1'))
    model.add(layers.Conv2D(256, (3, 3), activation='relu',
                           padding='same', name='conv3_2'))
    model.add(layers.BatchNormalization(name='bn3_2'))
    model.add(layers.Conv2D(256, (3, 3), activation='relu',
                           padding='same', name='conv3_3'))
    model.add(layers.BatchNormalization(name='bn3_3'))
    model.add(layers.MaxPooling2D((2, 2), name='pool3'))
    model.add(layers.Dropout(0.25, name='dropout3'))

    # Fully connected layers
    model.add(layers.Flatten(name='flatten'))
    model.add(layers.Dense(512, activation='relu', name='fc1'))
    model.add(layers.BatchNormalization(name='bn_fc1'))
    model.add(layers.Dropout(0.5, name='dropout_fc1'))
    model.add(layers.Dense(256, activation='relu', name='fc2'))
    model.add(layers.BatchNormalization(name='bn_fc2'))
    model.add(layers.Dropout(0.5, name='dropout_fc2'))
    model.add(layers.Dense(num_classes, activation='softmax', name='output'))

    return model

# Build VGG-style model
vgg_style = build_vgg_style_cnn()
vgg_style.summary()

19.2.2 Compile and Train the VGG-Style Model

# Compile model
compile_model(vgg_style, learning_rate=0.001)

# Create callbacks
callbacks_vgg = create_callbacks('vgg_style_cnn', patience=15)

# Train model
history_vgg = train_model(
    vgg_style, X_train, y_train, X_val, y_val,
    callbacks_vgg, epochs=EPOCHS, batch_size=BATCH_SIZE
)

# Plot history
plot_training_history(history_vgg, 'VGG-Style CNN',
                     save_path=dirs['figures'] / 'vgg_style_history.png')

# Evaluate
results_vgg = evaluate_model(vgg_style, X_test_norm, y_test_encoded,
                            'VGG-Style CNN')

19.3 Data Augmentation

Data augmentation can improve model generalization by creating variations of the training data.

19.3.1 Setup Data Augmentation

def create_data_generator(augmentation=True):
    """
    Create an ImageDataGenerator for data augmentation

    Parameters:
        augmentation: True to enable augmentation

    Returns:
        datagen: ImageDataGenerator
    """
    if augmentation:
        datagen = ImageDataGenerator(
            rotation_range=15,           # Random rotation of up to ±15 degrees
            width_shift_range=0.1,       # Horizontal shift of up to 10%
            height_shift_range=0.1,      # Vertical shift of up to 10%
            horizontal_flip=True,        # Random horizontal flip
            zoom_range=0.1,              # Zoom in/out by up to 10%
            fill_mode='nearest'          # Fill newly exposed pixels with nearest values
        )
        print("✓ Data augmentation enabled:")
        print("  - Rotation: ±15°")
        print("  - Width/Height shift: 10%")
        print("  - Horizontal flip: Yes")
        print("  - Zoom: ±10%")
    else:
        datagen = ImageDataGenerator()
        print("✓ No data augmentation (plain generator)")

    return datagen

# Create the augmented data generator
train_datagen = create_data_generator(augmentation=True)
# fit() is only required when featurewise statistics are enabled; it is a
# no-op for the transforms configured above, but harmless
train_datagen.fit(X_train)

19.3.2 Augmented Image Visualization

def plot_augmented_images(X, y, datagen, num_samples=5, save_path=None):
    """
    Visualize the results of data augmentation

    Parameters:
        X: original images
        y: labels
        datagen: ImageDataGenerator
        num_samples: number of samples to display
        save_path: path for saving the figure
    """
    # Pick one random image
    idx = np.random.randint(0, len(X))
    image = X[idx]
    label = np.argmax(y[idx])

    # Generate augmented versions
    image_batch = np.expand_dims(image, 0)

    fig, axes = plt.subplots(2, num_samples, figsize=(15, 6))
    fig.suptitle(f'Data Augmentation Examples - Class: {CLASS_NAMES[label]}',
                fontsize=14, fontweight='bold')

    # Original image in first row
    for i in range(num_samples):
        axes[0, i].imshow(image)
        axes[0, i].set_title('Original' if i == num_samples//2 else '',
                            fontsize=10, fontweight='bold')
        axes[0, i].axis('off')

    # Augmented images in second row
    aug_iter = datagen.flow(image_batch, batch_size=1)
    for i in range(num_samples):
        aug_image = next(aug_iter)[0]
        axes[1, i].imshow(aug_image)
        axes[1, i].set_title('Augmented' if i == num_samples//2 else '',
                            fontsize=10, fontweight='bold')
        axes[1, i].axis('off')

    plt.tight_layout()

    if save_path:
        plt.savefig(save_path, dpi=300, bbox_inches='tight')
        print(f"✓ Figure saved to: {save_path}")

    plt.show()

plot_augmented_images(X_train, y_train, train_datagen, num_samples=5,
                     save_path=dirs['figures'] / 'data_augmentation.png')

19.3.3 Train with Data Augmentation

# Build a fresh model for training with augmentation
vgg_aug = build_vgg_style_cnn()
compile_model(vgg_aug, learning_rate=0.001)

# Create callbacks
callbacks_vgg_aug = create_callbacks('vgg_style_augmented', patience=15)

# Train with the data generator
print("\nTraining VGG-Style CNN with Data Augmentation...")
print("=" * 70)

history_vgg_aug = vgg_aug.fit(
    train_datagen.flow(X_train, y_train, batch_size=BATCH_SIZE),
    steps_per_epoch=len(X_train) // BATCH_SIZE,
    epochs=EPOCHS,
    validation_data=(X_val, y_val),
    callbacks=callbacks_vgg_aug,
    verbose=1
)

print("\n✓ Training complete!")

# Plot history
plot_training_history(history_vgg_aug, 'VGG-Style CNN (Augmented)',
                     save_path=dirs['figures'] / 'vgg_aug_history.png')

# Evaluate
results_vgg_aug = evaluate_model(vgg_aug, X_test_norm, y_test_encoded,
                                'VGG-Style CNN (Augmented)')

19.4 Model Comparison

def compare_models(models_dict):
    """
    Compare the performance of multiple models

    Parameters:
        models_dict: dictionary {model_name: results}
    """
    print("\n" + "=" * 70)
    print("MODEL COMPARISON SUMMARY")
    print("=" * 70)

    comparison_data = []

    for model_name, results in models_dict.items():
        comparison_data.append({
            'Model': model_name,
            'Test Loss': results[0],
            'Test Accuracy': results[1],
            'Top-3 Accuracy': results[2]
        })

    df_comparison = pd.DataFrame(comparison_data)
    df_comparison = df_comparison.sort_values('Test Accuracy', ascending=False)

    print("\n", df_comparison.to_string(index=False))

    # Plot comparison
    fig, ax = plt.subplots(figsize=(12, 6))

    x = np.arange(len(df_comparison))
    width = 0.35

    bars1 = ax.bar(x - width/2, df_comparison['Test Accuracy'], width,
                   label='Test Accuracy', color='steelblue', alpha=0.8)
    bars2 = ax.bar(x + width/2, df_comparison['Top-3 Accuracy'], width,
                   label='Top-3 Accuracy', color='coral', alpha=0.8)

    ax.set_xlabel('Model', fontsize=12, fontweight='bold')
    ax.set_ylabel('Accuracy', fontsize=12, fontweight='bold')
    ax.set_title('Model Performance Comparison', fontsize=14, fontweight='bold')
    ax.set_xticks(x)
    ax.set_xticklabels(df_comparison['Model'], rotation=45, ha='right')
    ax.legend()
    ax.grid(axis='y', alpha=0.3)

    # Add value labels on bars
    for bars in [bars1, bars2]:
        for bar in bars:
            height = bar.get_height()
            ax.text(bar.get_x() + bar.get_width()/2., height,
                   f'{height:.3f}',
                   ha='center', va='bottom', fontsize=9, fontweight='bold')

    plt.tight_layout()
    plt.savefig(dirs['figures'] / 'model_comparison.png', dpi=300, bbox_inches='tight')
    plt.show()

    print("\n" + "=" * 70)

# Compare all models from Part 2
models_comparison = {
    'Simple CNN': results_simple,
    'VGG-Style CNN': results_vgg,
    'VGG-Style (Aug)': results_vgg_aug
}

compare_models(models_comparison)

20 Part 3: Transfer Learning - Feature Extraction

Transfer learning reuses a model already trained on a large dataset (ImageNet) for our task.

20.1 Load Pre-trained VGG16

20.1.1 Setup VGG16 Base

def load_pretrained_vgg16(input_shape=(32, 32, 3), trainable=False):
    """
    Load VGG16 pre-trained pada ImageNet

    Parameters:
        input_shape: shape input gambar
        trainable: freeze atau unfreeze base layers

    Returns:
        base_model: VGG16 base model
    """
    print("Loading VGG16 pre-trained model...")

    # Load VGG16 without the top (classifier) layers
    base_model = VGG16(
        include_top=False,
        weights='imagenet',
        input_shape=input_shape
    )

    # Freeze (or unfreeze) the base model layers according to `trainable`
    base_model.trainable = trainable

    print(f"✓ VGG16 loaded!")
    print(f"  Total layers: {len(base_model.layers)}")
    print(f"  Trainable: {trainable}")
    print(f"  Input shape: {input_shape}")
    print(f"  Output shape: {base_model.output_shape}")

    return base_model

# Load VGG16 base (frozen)
vgg16_base = load_pretrained_vgg16(trainable=False)
vgg16_base.summary()
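The printed output shape follows from VGG16's architecture: each of its five convolutional blocks ends in a 2×2 max-pool, so a 32×32 input is reduced to a 1×1×512 feature map before the classifier head. A quick arithmetic check:

```python
# Each of VGG16's 5 blocks ends in a 2x2 max-pool (stride 2),
# so the spatial size is divided by 2**5 = 32.
input_size = 32
num_pool_stages = 5
output_size = input_size // (2 ** num_pool_stages)

# VGG16's last conv block has 512 channels, so Flatten yields this many features
flattened_features = output_size * output_size * 512
print(output_size, flattened_features)
```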

20.1.2 Build Transfer Learning Model

def build_transfer_learning_model(base_model, num_classes=10, model_name='TransferLearning'):
    """
    Build model dengan pre-trained base dan custom classifier

    Parameters:
        base_model: pre-trained base model
        num_classes: jumlah kelas output
        model_name: nama model

    Returns:
        model: complete model
    """
    print(f"\nBuilding {model_name} model...")

    # Create Sequential model
    model = models.Sequential(name=model_name)

    # Add pre-trained base
    model.add(base_model)

    # Add custom classifier
    model.add(layers.Flatten(name='flatten'))
    model.add(layers.Dense(512, activation='relu', name='fc1'))
    model.add(layers.BatchNormalization(name='bn1'))
    model.add(layers.Dropout(0.5, name='dropout1'))
    model.add(layers.Dense(256, activation='relu', name='fc2'))
    model.add(layers.BatchNormalization(name='bn2'))
    model.add(layers.Dropout(0.5, name='dropout2'))
    model.add(layers.Dense(num_classes, activation='softmax', name='output'))

    print(f"✓ {model_name} model built!")
    print(f"  Total layers: {len(model.layers)}")

    return model

# Build a transfer learning model with VGG16
vgg16_tl = build_transfer_learning_model(vgg16_base, model_name='VGG16_FeatureExtraction')
vgg16_tl.summary()

20.1.3 Count Trainable Parameters

def print_trainable_parameters(model):
    """Print jumlah trainable dan non-trainable parameters"""

    trainable_count = np.sum([keras.backend.count_params(w) for w in model.trainable_weights])
    non_trainable_count = np.sum([keras.backend.count_params(w) for w in model.non_trainable_weights])

    print("\n" + "=" * 70)
    print("MODEL PARAMETERS")
    print("=" * 70)
    print(f"Trainable parameters: {trainable_count:,}")
    print(f"Non-trainable parameters: {non_trainable_count:,}")
    print(f"Total parameters: {trainable_count + non_trainable_count:,}")
    print(f"Trainable ratio: {trainable_count/(trainable_count + non_trainable_count)*100:.2f}%")
    print("=" * 70)

print_trainable_parameters(vgg16_tl)
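Most of those trainable parameters live in the dense head. A Dense layer with m inputs and n units has m·n weights plus n biases, so the head's contribution can be checked by hand (a sketch assuming the frozen VGG16 base flattens to 512 features, as it does for 32×32 inputs; the batch norm layers add a few thousand more):

```python
def dense_params(m, n):
    """Parameter count of a Dense layer: an m x n weight matrix plus n biases."""
    return m * n + n

# fc1: 512 -> 512, fc2: 512 -> 256, output: 256 -> 10
# (bn1/bn2 additionally contribute 2 trainable parameters per feature: gamma and beta)
head = (dense_params(512, 512)
        + dense_params(512, 256)
        + dense_params(256, 10))
print(head)
```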

20.1.4 Train VGG16 Feature Extraction

# Compile model
compile_model(vgg16_tl, learning_rate=0.001)

# Create callbacks
callbacks_vgg16_tl = create_callbacks('vgg16_feature_extraction', patience=15)

# Train the model with data augmentation
print("\nTraining VGG16 Feature Extraction...")
print("=" * 70)

history_vgg16_tl = vgg16_tl.fit(
    train_datagen.flow(X_train, y_train, batch_size=BATCH_SIZE),
    steps_per_epoch=len(X_train) // BATCH_SIZE,
    epochs=EPOCHS,
    validation_data=(X_val, y_val),
    callbacks=callbacks_vgg16_tl,
    verbose=1
)

print("\n✓ Training complete!")

# Plot history
plot_training_history(history_vgg16_tl, 'VGG16 Feature Extraction',
                     save_path=dirs['figures'] / 'vgg16_tl_history.png')

# Evaluate
results_vgg16_tl = evaluate_model(vgg16_tl, X_test_norm, y_test_encoded,
                                 'VGG16 Feature Extraction')

20.2 Load Pre-trained ResNet50

20.2.1 Setup ResNet50 Base

def load_pretrained_resnet50(input_shape=(32, 32, 3), trainable=False):
    """
    Load ResNet50 pre-trained pada ImageNet

    Parameters:
        input_shape: shape input gambar
        trainable: freeze atau unfreeze base layers

    Returns:
        base_model: ResNet50 base model
    """
    print("Loading ResNet50 pre-trained model...")

    # Load ResNet50 without the top layers
    base_model = ResNet50(
        include_top=False,
        weights='imagenet',
        input_shape=input_shape
    )

    # Freeze (or unfreeze) the base model layers according to `trainable`
    base_model.trainable = trainable

    print(f"✓ ResNet50 loaded!")
    print(f"  Total layers: {len(base_model.layers)}")
    print(f"  Trainable: {trainable}")
    print(f"  Input shape: {input_shape}")
    print(f"  Output shape: {base_model.output_shape}")

    return base_model

# Load ResNet50 base (frozen)
resnet50_base = load_pretrained_resnet50(trainable=False)

20.2.2 Build and Train ResNet50

# Build a transfer learning model with ResNet50
resnet50_tl = build_transfer_learning_model(resnet50_base,
                                           model_name='ResNet50_FeatureExtraction')

# Print parameters
print_trainable_parameters(resnet50_tl)

# Compile model
compile_model(resnet50_tl, learning_rate=0.001)

# Create callbacks
callbacks_resnet50_tl = create_callbacks('resnet50_feature_extraction', patience=15)

# Train model
history_resnet50_tl = resnet50_tl.fit(
    train_datagen.flow(X_train, y_train, batch_size=BATCH_SIZE),
    steps_per_epoch=len(X_train) // BATCH_SIZE,
    epochs=EPOCHS,
    validation_data=(X_val, y_val),
    callbacks=callbacks_resnet50_tl,
    verbose=1
)

# Plot history
plot_training_history(history_resnet50_tl, 'ResNet50 Feature Extraction',
                     save_path=dirs['figures'] / 'resnet50_tl_history.png')

# Evaluate
results_resnet50_tl = evaluate_model(resnet50_tl, X_test_norm, y_test_encoded,
                                    'ResNet50 Feature Extraction')

21 Part 4: Transfer Learning - Fine-tuning

Fine-tuning involves unfreezing a few of the base model's top layers and training them with a low learning rate.

21.1 VGG16 Fine-tuning

21.1.1 Unfreeze Top Layers

def unfreeze_top_layers(model, base_model, num_layers_to_unfreeze=4):
    """
    Unfreeze top N layers dari base model

    Parameters:
        model: complete model
        base_model: base model inside complete model
        num_layers_to_unfreeze: jumlah top layers yang di-unfreeze

    Returns:
        model: model dengan unfrozen layers
    """
    print(f"\nUnfreezing top {num_layers_to_unfreeze} layers...")

    # Make the entire base trainable first; the per-layer flags below override this
    base_model.trainable = True

    # Freeze everything except the top N layers
    total_layers = len(base_model.layers)
    freeze_until = total_layers - num_layers_to_unfreeze

    for layer in base_model.layers[:freeze_until]:
        layer.trainable = False

    for layer in base_model.layers[freeze_until:]:
        layer.trainable = True

    print(f"✓ Layers configuration:")
    print(f"  Total base layers: {total_layers}")
    print(f"  Frozen layers: {freeze_until}")
    print(f"  Trainable layers: {num_layers_to_unfreeze}")

    # Print trainable layers
    print(f"\n  Trainable layers:")
    for i, layer in enumerate(base_model.layers[freeze_until:]):
        print(f"    {freeze_until + i}: {layer.name}")

    return model

# Load best VGG16 feature extraction model
vgg16_ft = keras.models.load_model(dirs['checkpoints'] / 'vgg16_feature_extraction_best.h5')

# Unfreeze top 4 layers
vgg16_ft = unfreeze_top_layers(vgg16_ft, vgg16_ft.layers[0], num_layers_to_unfreeze=4)

# Print parameters after unfreezing
print_trainable_parameters(vgg16_ft)

21.1.2 Fine-tune with Lower Learning Rate

# Compile with a lower learning rate
compile_model(vgg16_ft, learning_rate=0.0001)  # 10x smaller than before

# Create callbacks
callbacks_vgg16_ft = create_callbacks('vgg16_finetuned', patience=15)

# Fine-tune model
print("\nFine-tuning VGG16...")
print("=" * 70)

history_vgg16_ft = vgg16_ft.fit(
    train_datagen.flow(X_train, y_train, batch_size=BATCH_SIZE),
    steps_per_epoch=len(X_train) // BATCH_SIZE,
    epochs=30,  # Fewer epochs for fine-tuning
    validation_data=(X_val, y_val),
    callbacks=callbacks_vgg16_ft,
    verbose=1
)

print("\n✓ Fine-tuning complete!")

# Plot history
plot_training_history(history_vgg16_ft, 'VGG16 Fine-tuned',
                     save_path=dirs['figures'] / 'vgg16_ft_history.png')

# Evaluate
results_vgg16_ft = evaluate_model(vgg16_ft, X_test_norm, y_test_encoded,
                                 'VGG16 Fine-tuned')

21.2 ResNet50 Fine-tuning

# Load best ResNet50 feature extraction model
resnet50_ft = keras.models.load_model(dirs['checkpoints'] / 'resnet50_feature_extraction_best.h5')

# Unfreeze top 10 layers (ResNet is deeper)
resnet50_ft = unfreeze_top_layers(resnet50_ft, resnet50_ft.layers[0],
                                 num_layers_to_unfreeze=10)

# Print parameters
print_trainable_parameters(resnet50_ft)

# Compile with a low learning rate
compile_model(resnet50_ft, learning_rate=0.0001)

# Create callbacks
callbacks_resnet50_ft = create_callbacks('resnet50_finetuned', patience=15)

# Fine-tune model
history_resnet50_ft = resnet50_ft.fit(
    train_datagen.flow(X_train, y_train, batch_size=BATCH_SIZE),
    steps_per_epoch=len(X_train) // BATCH_SIZE,
    epochs=30,
    validation_data=(X_val, y_val),
    callbacks=callbacks_resnet50_ft,
    verbose=1
)

# Plot history
plot_training_history(history_resnet50_ft, 'ResNet50 Fine-tuned',
                     save_path=dirs['figures'] / 'resnet50_ft_history.png')

# Evaluate
results_resnet50_ft = evaluate_model(resnet50_ft, X_test_norm, y_test_encoded,
                                    'ResNet50 Fine-tuned')

21.3 Progressive Unfreezing

Progressive unfreezing is a technique in which we gradually unfreeze more and more layers.

def progressive_unfreezing(model, base_model, stages=[2, 4, 8],
                          epochs_per_stage=10, initial_lr=0.0001):
    """
    Progressive unfreezing strategy

    Parameters:
        model: complete model
        base_model: base model
        stages: list jumlah layers yang di-unfreeze per stage
        epochs_per_stage: epochs untuk setiap stage
        initial_lr: initial learning rate

    Returns:
        histories: list of training histories
    """
    histories = []

    print("\n" + "=" * 70)
    print("PROGRESSIVE UNFREEZING")
    print("=" * 70)

    for stage, num_layers in enumerate(stages, 1):
        print(f"\n{'='*70}")
        print(f"STAGE {stage}: Unfreezing top {num_layers} layers")
        print(f"{'='*70}")

        # Unfreeze layers
        model = unfreeze_top_layers(model, base_model, num_layers)

        # Compile with a decaying learning rate
        lr = initial_lr / (stage ** 0.5)  # Decrease the LR at each stage
        compile_model(model, learning_rate=lr)

        # Create callbacks
        callbacks = create_callbacks(f'progressive_stage{stage}', patience=5)

        # Train
        history = model.fit(
            train_datagen.flow(X_train, y_train, batch_size=BATCH_SIZE),
            steps_per_epoch=len(X_train) // BATCH_SIZE,
            epochs=epochs_per_stage,
            validation_data=(X_val, y_val),
            callbacks=callbacks,
            verbose=1
        )

        histories.append(history)

        # Evaluate after stage
        results = model.evaluate(X_val, y_val, verbose=0)
        print(f"\nStage {stage} Results:")
        print(f"  Val Accuracy: {results[1]:.4f}")

    print("\n" + "=" * 70)
    print("PROGRESSIVE UNFREEZING COMPLETE")
    print("=" * 70)

    return histories

# Try progressive unfreezing on VGG16
vgg16_prog = keras.models.load_model(dirs['checkpoints'] / 'vgg16_feature_extraction_best.h5')

histories_progressive = progressive_unfreezing(
    vgg16_prog, vgg16_prog.layers[0],
    stages=[2, 4, 6],
    epochs_per_stage=10,
    initial_lr=0.0001
)
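The per-stage learning rates produced by the `initial_lr / (stage ** 0.5)` rule above decay gently as more layers open up:

```python
# Learning rates used at each progressive-unfreezing stage
initial_lr = 1e-4
stage_lrs = [initial_lr / (stage ** 0.5) for stage in (1, 2, 3)]
# stage 1 keeps the initial rate; each later stage shrinks it by a factor of 1/sqrt(stage)
print(stage_lrs)
```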

# Evaluate final model
results_vgg16_prog = evaluate_model(vgg16_prog, X_test_norm, y_test_encoded,
                                   'VGG16 Progressive Unfreezing')

22 Part 5: Advanced Techniques

22.1 Model Ensemble

Ensembling several models to improve accuracy.

def create_ensemble_predictions(models, X_test, weights=None):
    """
    Create ensemble predictions dari multiple models

    Parameters:
        models: list of trained models
        X_test: test data
        weights: optional weights untuk setiap model

    Returns:
        ensemble_predictions: weighted average predictions
    """
    print("Creating ensemble predictions...")

    if weights is None:
        weights = [1.0 / len(models)] * len(models)

    # Normalize weights
    weights = np.array(weights) / np.sum(weights)

    # Get predictions from each model
    all_predictions = []
    for i, model in enumerate(models):
        print(f"  Getting predictions from model {i+1}...")
        preds = model.predict(X_test, verbose=0)
        all_predictions.append(preds)

    # Weighted average
    ensemble_preds = np.zeros_like(all_predictions[0])
    for preds, weight in zip(all_predictions, weights):
        ensemble_preds += preds * weight

    print(f"✓ Ensemble predictions created!")
    print(f"  Models: {len(models)}")
    print(f"  Weights: {weights}")

    return ensemble_preds

# Create an ensemble from the best models
ensemble_models = [vgg16_ft, resnet50_ft]
ensemble_weights = [0.5, 0.5]  # Equal weights

ensemble_preds = create_ensemble_predictions(ensemble_models, X_test_norm,
                                            weights=ensemble_weights)

# Evaluate ensemble
y_test_labels = np.argmax(y_test_encoded, axis=1)
ensemble_pred_labels = np.argmax(ensemble_preds, axis=1)
ensemble_accuracy = accuracy_score(y_test_labels, ensemble_pred_labels)

print(f"\nEnsemble Test Accuracy: {ensemble_accuracy:.4f}")

22.2 Confusion Matrix Analysis

def plot_confusion_matrix(y_true, y_pred, class_names, model_name='Model',
                         save_path=None):
    """
    Plot the confusion matrix

    Parameters:
        y_true: true labels
        y_pred: predicted labels
        class_names: list of class names
        model_name: model name
        save_path: path to save the figure
    """
    # Compute confusion matrix
    cm = confusion_matrix(y_true, y_pred)

    # Normalize
    cm_norm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]

    # Plot
    fig, axes = plt.subplots(1, 2, figsize=(18, 7))

    # Plot absolute values
    sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
               xticklabels=class_names, yticklabels=class_names,
               ax=axes[0], cbar_kws={'label': 'Count'})
    axes[0].set_xlabel('Predicted Label', fontsize=12, fontweight='bold')
    axes[0].set_ylabel('True Label', fontsize=12, fontweight='bold')
    axes[0].set_title(f'{model_name} - Confusion Matrix (Counts)',
                     fontsize=14, fontweight='bold')

    # Plot normalized values
    sns.heatmap(cm_norm, annot=True, fmt='.2f', cmap='Greens',
               xticklabels=class_names, yticklabels=class_names,
               ax=axes[1], cbar_kws={'label': 'Proportion'})
    axes[1].set_xlabel('Predicted Label', fontsize=12, fontweight='bold')
    axes[1].set_ylabel('True Label', fontsize=12, fontweight='bold')
    axes[1].set_title(f'{model_name} - Confusion Matrix (Normalized)',
                     fontsize=14, fontweight='bold')

    plt.tight_layout()

    if save_path:
        plt.savefig(save_path, dpi=300, bbox_inches='tight')
        print(f"✓ Figure saved to: {save_path}")

    plt.show()

    # Print classification report
    print(f"\n{model_name} - Classification Report:")
    print("=" * 70)
    print(classification_report(y_true, y_pred, target_names=class_names))

# Plot the confusion matrix for the ensemble model
plot_confusion_matrix(y_test_labels, ensemble_pred_labels, CLASS_NAMES,
                     model_name='Ensemble Model',
                     save_path=dirs['figures'] / 'ensemble_confusion_matrix.png')

22.3 Grad-CAM Visualization

Grad-CAM (Gradient-weighted Class Activation Mapping) visualizes which parts of an image drive the prediction.

def make_gradcam_heatmap(img_array, model, last_conv_layer_name, pred_index=None):
    """
    Generate Grad-CAM heatmap

    Parameters:
        img_array: input image (preprocessed)
        model: trained model
        last_conv_layer_name: nama last convolutional layer
        pred_index: class index untuk visualisasi (None = predicted class)

    Returns:
        heatmap: Grad-CAM heatmap
    """
    # Create a model that maps the input to the last conv layer's activations and the predictions
    # (the named layer must sit directly on `model`, not nested inside a sub-model)
    grad_model = keras.models.Model(
        [model.inputs],
        [model.get_layer(last_conv_layer_name).output, model.output]
    )

    # Compute gradient
    with tf.GradientTape() as tape:
        conv_outputs, predictions = grad_model(img_array)
        if pred_index is None:
            pred_index = tf.argmax(predictions[0])
        class_channel = predictions[:, pred_index]

    # Gradient of the target class score with respect to the output feature map
    grads = tape.gradient(class_channel, conv_outputs)

    # Pooled gradients
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))

    # Weighted combination
    conv_outputs = conv_outputs[0]
    heatmap = conv_outputs @ pooled_grads[..., tf.newaxis]
    heatmap = tf.squeeze(heatmap)

    # Normalize heatmap
    heatmap = tf.maximum(heatmap, 0) / tf.math.reduce_max(heatmap)

    return heatmap.numpy()
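The weighted-combination step in `make_gradcam_heatmap` is a matrix product over the channel axis: an (H, W, C) feature map times a (C, 1) vector of pooled gradients yields an (H, W) importance map. A NumPy shape check (the 7×7×512 shape is just an illustrative choice):

```python
import numpy as np

H, W, C = 7, 7, 512                       # illustrative last-conv feature map shape
conv_outputs = np.random.rand(H, W, C)
pooled_grads = np.random.rand(C)

# (H, W, C) @ (C, 1) -> (H, W, 1), i.e. each spatial location is a
# gradient-weighted sum over its channels
heatmap = conv_outputs @ pooled_grads[..., np.newaxis]
heatmap = np.squeeze(heatmap)             # -> (H, W)
```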

def plot_gradcam(img, heatmap, alpha=0.4):
    """
    Overlay Grad-CAM heatmap pada gambar original

    Parameters:
        img: original image
        heatmap: Grad-CAM heatmap
        alpha: transparency level

    Returns:
        superimposed_img: image dengan heatmap overlay
    """
    # Resize the heatmap to the image size
    heatmap = cv2.resize(heatmap, (img.shape[1], img.shape[0]))

    # Map heatmap values onto a color map (OpenCV returns BGR)
    heatmap = np.uint8(255 * heatmap)
    heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)

    # Convert BGR to RGB
    heatmap = cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB)

    # Superimpose (img is in [0, 1], hence the scaling by 255)
    superimposed_img = heatmap * alpha + img * 255
    superimposed_img = np.clip(superimposed_img, 0, 255).astype('uint8')

    return superimposed_img

# Visualize Grad-CAM for several samples
def visualize_gradcam_samples(model, X_test, y_test, num_samples=6,
                             last_conv_layer_name='conv5_block3_out',
                             save_path=None):
    """Visualisasi Grad-CAM untuk multiple samples"""

    # Random samples
    indices = np.random.choice(len(X_test), num_samples, replace=False)

    fig, axes = plt.subplots(num_samples, 3, figsize=(12, num_samples*3))
    fig.suptitle('Grad-CAM Visualization', fontsize=16, fontweight='bold')

    for i, idx in enumerate(indices):
        img = X_test[idx]
        true_label = np.argmax(y_test[idx])

        # Get prediction
        img_array = np.expand_dims(img, axis=0)
        preds = model.predict(img_array, verbose=0)
        pred_label = np.argmax(preds[0])
        pred_prob = preds[0][pred_label]

        # Generate Grad-CAM
        try:
            heatmap = make_gradcam_heatmap(img_array, model, last_conv_layer_name)
            gradcam_img = plot_gradcam(img, heatmap)
        except Exception:
            print(f"⚠ Grad-CAM failed for sample {i}, using placeholder")
            heatmap = np.zeros(img.shape[:2])  # blank heatmap so the plot below still works
            gradcam_img = img

        # Plot original
        axes[i, 0].imshow(img)
        axes[i, 0].set_title(f'Original\nTrue: {CLASS_NAMES[true_label]}',
                            fontsize=10)
        axes[i, 0].axis('off')

        # Plot heatmap
        axes[i, 1].imshow(heatmap, cmap='jet')
        axes[i, 1].set_title(f'Heatmap', fontsize=10)
        axes[i, 1].axis('off')

        # Plot Grad-CAM overlay
        axes[i, 2].imshow(gradcam_img)
        axes[i, 2].set_title(f'Grad-CAM\nPred: {CLASS_NAMES[pred_label]} ({pred_prob:.2f})',
                            fontsize=10)
        axes[i, 2].axis('off')

    plt.tight_layout()

    if save_path:
        plt.savefig(save_path, dpi=300, bbox_inches='tight')
        print(f"✓ Figure saved to: {save_path}")

    plt.show()

# Visualize Grad-CAM for ResNet50 (ResNet has specific layer names)
try:
    # Find the name of the last conv layer
    conv_layers = [layer.name for layer in resnet50_ft.layers[0].layers
                   if 'conv' in layer.name.lower()]
    last_conv_layer = conv_layers[-1] if conv_layers else 'conv5_block3_out'

    print(f"Using last conv layer: {last_conv_layer}")

    visualize_gradcam_samples(resnet50_ft, X_test_norm, y_test_encoded,
                             num_samples=6,
                             last_conv_layer_name=last_conv_layer,
                             save_path=dirs['figures'] / 'gradcam_visualization.png')
except Exception as e:
    print(f"⚠ Grad-CAM visualization failed: {e}")
    print("  Skipping Grad-CAM visualization")

22.4 Feature Map Visualization

def visualize_feature_maps(model, img, layer_names, save_path=None):
    """
    Visualisasi feature maps dari convolutional layers

    Parameters:
        model: trained model
        img: input image
        layer_names: list of layer names untuk visualisasi
        save_path: path untuk save figure
    """
    # Create a model that extracts the feature maps
    layer_outputs = [model.get_layer(name).output for name in layer_names]
    activation_model = keras.models.Model(inputs=model.input, outputs=layer_outputs)

    # Get activations
    img_array = np.expand_dims(img, axis=0)
    activations = activation_model.predict(img_array, verbose=0)

    # Plot feature maps
    num_layers = len(layer_names)
    fig, axes = plt.subplots(num_layers + 1, 8, figsize=(20, 2.5*(num_layers + 1)))
    fig.suptitle('Feature Maps Visualization', fontsize=16, fontweight='bold')

    # Plot original image
    for j in range(8):
        axes[0, j].imshow(img)
        if j == 0:
            axes[0, j].set_ylabel('Original', fontsize=10, fontweight='bold')
        axes[0, j].axis('off')

    # Plot feature maps for each layer
    for i, (layer_name, activation) in enumerate(zip(layer_names, activations), 1):
        num_features = min(8, activation.shape[-1])

        for j in range(8):
            if j < num_features:
                feature_map = activation[0, :, :, j]
                axes[i, j].imshow(feature_map, cmap='viridis')
            else:
                axes[i, j].axis('off')

            if j == 0:
                axes[i, j].set_ylabel(layer_name, fontsize=8, fontweight='bold')
            axes[i, j].set_xticks([])
            axes[i, j].set_yticks([])

    plt.tight_layout()

    if save_path:
        plt.savefig(save_path, dpi=300, bbox_inches='tight')
        print(f"✓ Figure saved to: {save_path}")

    plt.show()

# Visualize feature maps from the Simple CNN
try:
    sample_idx = np.random.randint(0, len(X_test_norm))
    sample_img = X_test_norm[sample_idx]

    # Select a few conv layers
    conv_layer_names = ['conv1', 'conv2', 'conv3']

    visualize_feature_maps(simple_cnn, sample_img, conv_layer_names,
                          save_path=dirs['figures'] / 'feature_maps.png')
except Exception as e:
    print(f"⚠ Feature map visualization failed: {e}")

22.5 Final Model Comparison

# Compile all results
all_models_comparison = {
    'Simple CNN': results_simple,
    'VGG-Style CNN': results_vgg,
    'VGG-Style (Aug)': results_vgg_aug,
    'VGG16 Feature Ext': results_vgg16_tl,
    'ResNet50 Feature Ext': results_resnet50_tl,
    'VGG16 Fine-tuned': results_vgg16_ft,
    'ResNet50 Fine-tuned': results_resnet50_ft,
    'VGG16 Progressive': results_vgg16_prog
}

# Add ensemble results
all_models_comparison['Ensemble'] = [
    0.0,  # Loss not directly available
    ensemble_accuracy,
    0.0   # Top-3 not calculated
]

# Compare all models
compare_models(all_models_comparison)

23 Conclusion

23.1 Learning Summary

In this lab, you have learned:

  1. CNN Architecture: Building CNNs from scratch at varying levels of complexity
  2. Data Augmentation: Improving generalization by augmenting the data
  3. Transfer Learning: Leveraging pre-trained models (VGG16, ResNet50)
  4. Fine-tuning: Adapting pre-trained models to a specific task
  5. Advanced Techniques: Ensembles, Grad-CAM, and visualization
  6. Model Comparison: Comparing the different approaches

23.2 Key Takeaways

print("=" * 70)
print("KEY TAKEAWAYS - CIFAR-10 IMAGE CLASSIFICATION")
print("=" * 70)
print("""
1. CNN ARCHITECTURE:
   - Simple CNN bisa achieve ~70-75% accuracy
   - Deeper networks (VGG-style) improve ke ~78-82%
   - Batch normalization dan dropout penting untuk regularization

2. DATA AUGMENTATION:
   - Meningkatkan accuracy ~3-5%
   - Membantu model generalize better
   - Essential untuk dataset kecil-medium

3. TRANSFER LEARNING:
   - Feature extraction: ~80-85% accuracy dengan training minimal
   - Pre-trained models sudah punya feature detectors yang bagus
   - Jauh lebih cepat daripada training from scratch

4. FINE-TUNING:
   - Meningkatkan accuracy ~2-5% dari feature extraction
   - Butuh learning rate yang lebih kecil
   - Progressive unfreezing lebih stable

5. ENSEMBLE:
   - Combining models bisa boost accuracy 1-3%
   - Trade-off: lebih akurat tapi lebih lambat
   - Best untuk production systems

6. BEST PRACTICES:
   - Start simple, iterate gradually
   - Always use validation set
   - Monitor overfitting
   - Save best models
   - Visualize to understand
""")
print("=" * 70)

23.3 Suggestions for Further Experiments

Some experiments you can try:

  1. Architecture Variations:

    • Try EfficientNet, DenseNet, or MobileNet
    • Experiment with different layer configurations
    • Try different activation functions (Swish, GELU)
  2. Regularization Techniques:

    • L1/L2 regularization
    • Different dropout rates
    • Cutout/Mixup augmentation
  3. Optimization:

    • Different optimizers (SGD with momentum, AdamW)
    • Learning rate schedules (cosine annealing)
    • Batch size effects
  4. Advanced Transfer Learning:

    • Multi-stage fine-tuning
    • Knowledge distillation
    • Meta-learning approaches
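For the cosine-annealing suggestion, a minimal schedule is easy to sketch as a plain Python function (recent Keras versions also ship `keras.optimizers.schedules.CosineDecay`):

```python
import math

def cosine_annealing(epoch, total_epochs, lr_max=1e-3, lr_min=1e-6):
    """Cosine-annealed learning rate: lr_max at epoch 0, lr_min at the last epoch."""
    progress = epoch / (total_epochs - 1)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))

# Starts at lr_max and decreases smoothly to lr_min over the run
lrs = [cosine_annealing(e, total_epochs=30) for e in range(30)]
```

This can be wired into training with `keras.callbacks.LearningRateScheduler(lambda epoch: cosine_annealing(epoch, EPOCHS))`.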

23.4 References

  • Krizhevsky, A. (2009). Learning Multiple Layers of Features from Tiny Images
  • Simonyan, K., & Zisserman, A. (2014). Very Deep Convolutional Networks (VGG)
  • He, K., et al. (2016). Deep Residual Learning for Image Recognition (ResNet)
  • Selvaraju, R. R., et al. (2017). Grad-CAM: Visual Explanations from Deep Networks

24 Submission Guidelines

24.1 Deliverables

Submit the following files:

  1. Notebook (.ipynb or .qmd)

  2. Trained Models (best checkpoints)

  3. Figures (all visualizations)

  4. Report (PDF, at most 10 pages) containing:

    • Experiment summary
    • Results for each model
    • Comparative analysis
    • Insights and conclusions

24.2 Grading Rubric

See the file rubric.md for grading details.

Total Points: 100 + 10 Bonus

  • Part 1: Data Exploration (15 points)
  • Part 2: CNN from Scratch (25 points)
  • Part 3: Transfer Learning - Feature Extraction (25 points)
  • Part 4: Fine-tuning (20 points)
  • Part 5: Advanced Techniques (15 points)
  • Bonus: Achieve >85% test accuracy (+10 points)

Good luck! Happy Learning! 🚀