References

This textbook was compiled from a range of academic literature, technical documentation, and industry best practices. Below is the list of references used in preparing the learning material.

Primary Textbooks

Machine Learning & Deep Learning:

  • Géron, A. (2022). Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow (3rd ed.). O’Reilly Media.

  • Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. http://www.deeplearningbook.org

  • Murphy, K. P. (2022). Probabilistic Machine Learning: An Introduction. MIT Press.

  • Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.

  • James, G., Witten, D., Hastie, T., & Tibshirani, R. (2021). An Introduction to Statistical Learning (2nd ed.). Springer.

Natural Language Processing & LLMs:

  • Jurafsky, D., & Martin, J. H. (2023). Speech and Language Processing (3rd ed. draft). https://web.stanford.edu/~jurafsky/slp3/

  • Tunstall, L., von Werra, L., & Wolf, T. (2022). Natural Language Processing with Transformers. O’Reilly Media.

MLOps & Production:

  • Huyen, C. (2022). Designing Machine Learning Systems. O’Reilly Media.

  • Lakshmanan, V., Robinson, S., & Munn, M. (2020). Machine Learning Design Patterns. O’Reilly Media.

Key Research Papers

Foundational Papers:

  • Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386-408.

  • Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533-536.

  • LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.

Modern Deep Learning:

  • Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-1105).

  • Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.

  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 5998-6008).

Large Language Models:

  • Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT (pp. 4171-4186).

  • Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., … & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.

  • Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9.

Retrieval-Augmented Generation:

  • Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., … & Kiela, D. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33, 9459-9474.

Parameter-Efficient Fine-tuning:

  • Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., … & Chen, W. (2022). LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.

Ethical AI & Fairness:

  • Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1-35.

  • Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., … & Gebru, T. (2019). Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency (pp. 220-229).

Technical Documentation

Libraries & Frameworks:

  • Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., … & Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825-2830.

  • Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., … & Zheng, X. (2016). TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.

  • Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., … & Chintala, S. (2019). PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (pp. 8026-8037).

  • Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., … & Rush, A. M. (2020). Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations (pp. 38-45).

Online Documentation:

  • scikit-learn Development Team. (2024). scikit-learn: Machine Learning in Python. https://scikit-learn.org/stable/

  • TensorFlow Development Team. (2024). TensorFlow Documentation. https://www.tensorflow.org/

  • PyTorch Development Team. (2024). PyTorch Documentation. https://pytorch.org/docs/

  • Hugging Face Team. (2024). Transformers Documentation. https://huggingface.co/docs/transformers/

Online Learning Resources

Courses & Tutorials:

  • Ng, A. (2024). Machine Learning Specialization. Coursera/Stanford University.

  • Karpathy, A. (2023). Neural Networks: Zero to Hero. YouTube series. https://karpathy.ai/zero-to-hero.html

  • Fast.ai. (2024). Practical Deep Learning for Coders. https://course.fast.ai/

Technical Blogs & Communities:

  • Distill. (2016-2021). Distill: Machine Learning Research Should Be Clear, Dynamic and Vivid. https://distill.pub/

  • Hugging Face Blog. (2024). https://huggingface.co/blog

  • Papers with Code. (2024). The latest in Machine Learning. https://paperswithcode.com/

Dataset References

Benchmark Datasets:

  • Dua, D., & Graff, C. (2019). UCI Machine Learning Repository. University of California, Irvine, School of Information and Computer Sciences. http://archive.ics.uci.edu/ml

  • Kaggle Inc. (2024). Kaggle Datasets. https://www.kaggle.com/datasets

  • ImageNet. (2009–2017). ImageNet Large Scale Visual Recognition Challenge (ILSVRC). https://www.image-net.org/

Specialized Datasets (used in the labs):

  • Anderson, H. S., & Roth, P. (2018). EMBER: An open dataset for training static PE malware machine learning models. arXiv preprint arXiv:1804.04637.

  • Titanic Dataset. (n.d.). Kaggle. https://www.kaggle.com/c/titanic

  • Amazon Product Reviews. (n.d.). Amazon Customer Reviews Dataset. https://s3.amazonaws.com/amazon-reviews-pds/

Tools & Platforms

Development Tools:

  • Allaire, J. J. (2024). Quarto: An open-source scientific and technical publishing system. https://quarto.org/

  • Jupyter Development Team. (2024). Project Jupyter. https://jupyter.org/

  • Visual Studio Code. (2024). https://code.visualstudio.com/

Deployment & Production:

  • Docker Inc. (2024). Docker Documentation. https://docs.docker.com/

  • Ramírez, S. (2024). FastAPI. https://fastapi.tiangolo.com/

  • Kubernetes. (2024). Kubernetes Documentation. https://kubernetes.io/docs/

Standards & Best Practices

AI Ethics & Governance:

  • European Commission. (2021). Proposal for a Regulation on Artificial Intelligence (AI Act). https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

  • The White House. (2023). Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. https://www.whitehouse.gov/briefing-room/

  • Mozilla Foundation. (2024). Data & AI Ethics. https://foundation.mozilla.org/

ML Best Practices:

  • Google. (2024). Machine Learning Crash Course. https://developers.google.com/machine-learning/crash-course

  • Microsoft. (2024). Responsible AI. https://www.microsoft.com/en-us/ai/responsible-ai

  • MLOps Community. (2024). https://mlops.community/

Usage Notes

Accessing the References

Most of the references above are freely available online. For academic papers behind a paywall, students can obtain access through:

  1. Institutional library - access to subscription journals
  2. arXiv.org - preprint versions of many papers
  3. Google Scholar - find open-access versions
  4. ResearchGate - author-shared versions
  5. Sci-Hub - (use in accordance with institutional policy)

Periodic Updates

The field of Machine Learning evolves very quickly, so this reference list will be updated periodically. Students are encouraged to:

  • Follow the latest conferences (NeurIPS, ICML, CVPR, ACL, EMNLP)
  • Read arXiv.org regularly
  • Follow technical blogs by leading researchers
  • Join ML communities (Reddit r/MachineLearning, Discord servers)

Last updated: 7 December 2025. Version: 1.0