Model Context Protocol guide
How connected AI tools use external data, tool calls, and services.
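At the protocol level, MCP frames tool invocations as JSON-RPC 2.0 messages. The sketch below builds one such request by hand; the `get_weather` tool and its arguments are illustrative, not part of any real server.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request of the kind MCP uses to invoke a tool."""
    message = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # MCP method for invoking a named tool
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(message)

# Hypothetical tool and arguments, for illustration only.
raw = make_tool_call(1, "get_weather", {"city": "Berlin"})
parsed = json.loads(raw)
```

A real client would send this over the MCP transport (stdio or HTTP) and match the response by `id`; the point here is only the message shape.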
Practical notes on building AI systems that survive real workflows: LLM applications, agentic automation, evaluation, retrieval, MLOps, deployment, observability, and the data platforms underneath them.
A technical guide to agentic automation systems and workflow design.
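The core of most agentic workflows is a loop that executes a plan step by step, routing each step to a named tool and feeding observations forward. A minimal sketch, in which both tools and the plan format are hypothetical stand-ins:

```python
# Minimal agent-loop sketch. The tools and plan format are hypothetical
# illustrations, not any particular framework's API.

def search(query: str) -> str:
    return f"results for {query!r}"  # stand-in for a real search tool

def summarize(text: str) -> str:
    return text[:40]  # stand-in for an LLM summarization call

TOOLS = {"search": search, "summarize": summarize}

def run_plan(plan):
    """Execute (tool_name, argument) steps; None means 'use the last result'."""
    observation = None
    for tool_name, arg in plan:
        tool = TOOLS[tool_name]
        observation = tool(arg if arg is not None else observation)
    return observation

result = run_plan([("search", "delta lake compaction"), ("summarize", None)])
```

Real systems add retries, step limits, and validation of tool output, but the dispatch-and-observe shape stays the same.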
Building document processing workflows with LlamaReport and LlamaCloud.
A hands-on guide for understanding confidence and uncertainty in LLM output.
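One common starting point for LLM confidence is aggregating the per-token log-probabilities an API returns. A minimal sketch, assuming you already have those logprobs (the values below are made up):

```python
import math

def sequence_confidence(token_logprobs):
    """Aggregate per-token log-probabilities into summary confidence scores.

    Returns mean log-prob, joint probability of the sequence, and perplexity.
    In practice the logprobs come from an LLM API; these values are invented.
    """
    n = len(token_logprobs)
    total = sum(token_logprobs)
    mean_lp = total / n
    joint_p = math.exp(total)        # probability of the whole sequence
    perplexity = math.exp(-mean_lp)  # lower means more confident
    return mean_lp, joint_p, perplexity

mean_lp, joint_p, ppl = sequence_confidence([-0.1, -0.3, -0.05, -1.2])
```

Joint probability shrinks with sequence length, which is why length-normalized scores like mean log-prob or perplexity are usually the more comparable numbers.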
Model management and deployment foundations for production ML work.
Using Docker for reproducible data science and machine learning environments.
Distributed compute patterns for machine learning workloads.
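The most common of these patterns is scatter-gather: shard the data, process shards in parallel, then reduce the partial results. A single-machine sketch using the standard library (the per-shard work here is a hypothetical stand-in):

```python
from concurrent.futures import ThreadPoolExecutor

def shard(items, n_shards):
    """Split work into roughly even shards for data-parallel processing."""
    return [items[i::n_shards] for i in range(n_shards)]

def process_shard(shard_items):
    # Stand-in for real per-shard compute (feature extraction, scoring, ...).
    return sum(x * x for x in shard_items)

def parallel_sum_of_squares(items, n_workers=4):
    shards = shard(items, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(process_shard, shards)  # scatter
    return sum(partials)                            # gather / reduce

total = parallel_sum_of_squares(range(10))
```

Swap the executor for a cluster scheduler and the same shard/map/reduce shape scales out; the hard parts that frameworks add are shard placement, retries, and skew handling.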
Notes on transparency, fairness, privacy, robustness, and accountability.
Delta Lake, MLflow, Unity Catalog, and data platform implementation notes.
Container orchestration concepts and implementation patterns.
High-performance machine learning and numerical computing patterns.
Modern Python packaging, testing, documentation, and release automation.
This column covers the engineering layer between AI demos and dependable products: system design, integration, evaluation, reliability, data quality, deployment, and developer workflow. General ML theory still lives on the main blog; this page collects the posts closest to shipping AI systems.