Brew AI: Practical AI That Delivers Results

    Don't waste time on wishy-washy concepts.

    Brew AI cuts to the chase with proper hands-on coding and implementation techniques.

    Brew AI returns in September 2025. Led by John Davies (ex-Chief Architect of Visa, V.me, JP Morgan, BNP Paribas).

    Designed for:

    Developers, technical leads, and engineers looking to implement AI solutions in production environments. Ideal for those wanting to move beyond theory to practical application.

    The Watershed, Bristol
    Returning September 2025

    The Watershed

    Bristol's Cultural Cinema and Digital Creativity Centre

    Beyond Theory — Practical AI For Real Developers

    Don't waste time on wishy-washy AI concepts.

    Brew AI cuts to the chase with proper hands-on coding, live model tuning, and real-world implementation techniques that actually work.

    Led by John Davies (ex-Chief Architect of Visa and V.me), this workshop bridges theory with immediate practical application—exactly what you need to deploy AI solutions that deliver genuine business value.

    Returning September 2025 • The Watershed, Bristol

    Learn AI That Actually Delivers

    Workshop curriculum covering intensive hands-on AI coding and implementation

    Kick-Off & Introduction

    • Welcome from John Davies (ex-Chief Architect of Visa and V.me)
    • Overview of the workshop: bridging AI & Gen-AI theory with live coding, deep technical dives, and real-world integration

    AI Fundamentals – How Transformers & Tokenisation Really Work

    • Neural networks vs. GenAI/LLMs
    • Deep dive into tokenisation, embeddings, transformers, and attention mechanisms
    • Live demo: Running LLMs (Text, Chat, Instruction, Vision, Speech) locally, remotely, and in the cloud
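
    A flavour of the tokenisation deep dive in practice, as a minimal sketch: it assumes the Hugging Face transformers library and uses the GPT-2 tokeniser purely as an illustration.

    # Minimal tokenisation sketch (assumes: pip install transformers).
    # The GPT-2 tokeniser is used purely as an illustration.
    from transformers import AutoTokenizer

    tokeniser = AutoTokenizer.from_pretrained("gpt2")

    text = "Brew AI cuts to the chase."
    token_ids = tokeniser.encode(text)                    # text -> integer token IDs
    tokens = tokeniser.convert_ids_to_tokens(token_ids)   # IDs -> the sub-word pieces the model sees

    print(token_ids)
    print(tokens)
    print(tokeniser.decode(token_ids))                    # IDs -> back to text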

    Hands-On with Local, Remote & Cloud Models

    • Installing and tuning local models (e.g., Llama, Qwen, Mistral & Gemma)
    • Live coding: Encode/decode text in Python, inspect token IDs, tune parameters (batch size, context window, temperature)
    • Code samples provided in C, C#, Java, TypeScript, Python and others
    • Examples for local, remote, and cloud deployment
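
    As one example of the hands-on local-model work, here is a minimal sketch that talks to a locally hosted model through an OpenAI-compatible endpoint. It assumes Ollama is running locally with a pulled model; the endpoint URL and the "llama3" model name are illustrative.

    # Minimal sketch: call a locally hosted model via an OpenAI-compatible API.
    # Assumes Ollama is running locally and a model (here "llama3") has been pulled.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    response = client.chat.completions.create(
        model="llama3",
        messages=[{"role": "user", "content": "Explain attention in one sentence."}],
        temperature=0.2,    # lower temperature = more deterministic output
        max_tokens=128,     # cap the length of the reply
    )
    print(response.choices[0].message.content)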

    Coffee & Code Chat Break

    Prompt Engineering & Model Tuning

    • Understanding temperature, context windows, and prompt precision
    • Techniques for effective prompt design and output control
    • Live coding: Build a Python tool that outputs structured JSON for summarisation/data extraction
    • Best practices for prompt debugging and iterative refinement
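
    To give a sense of the live-coding exercise, here is a minimal sketch of a Python tool that asks a model for structured JSON and parses the reply. The OpenAI-compatible endpoint, model name, and JSON keys are illustrative assumptions; a production tool would also validate and retry when the reply is not valid JSON.

    # Minimal sketch: prompt a model for structured JSON and parse the reply.
    import json
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    SYSTEM = (
        "You are a summarisation tool. Reply with JSON only, using the keys "
        '"summary" (string) and "keywords" (list of strings).'
    )

    def summarise(text: str) -> dict:
        response = client.chat.completions.create(
            model="llama3",
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": text},
            ],
            temperature=0.0,   # keep the output as deterministic as possible
        )
        return json.loads(response.choices[0].message.content)

    print(summarise("Brew AI returns to the Watershed in Bristol in September 2025."))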

    Pizza & Brews Break

    • Recharge with pizza and beers
    • Share debugging tips and optimisation hacks with fellow devs

    RAG – Retrieval-Augmented Generation

    • Chunking, embedding, parsing, and vector storage
    • RAG optimisation strategies: better chunking, smarter embedding techniques, and vector DB options
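
    As a taste of the RAG session, here is a minimal retrieval sketch: chunk a document, embed the chunks, and pull back the closest matches for a query. It assumes the sentence-transformers library; the model name, naive chunking, and in-memory search are illustrative, and a real pipeline would use a proper vector database.

    # Minimal RAG retrieval sketch: chunk, embed, and find the closest chunks.
    # Assumes: pip install sentence-transformers numpy.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    document = "..."  # your source text goes here
    chunks = [document[i:i + 500] for i in range(0, len(document), 500)]  # naive fixed-size chunking

    chunk_vectors = model.encode(chunks, normalize_embeddings=True)

    def retrieve(query: str, k: int = 3) -> list[str]:
        query_vector = model.encode([query], normalize_embeddings=True)[0]
        scores = chunk_vectors @ query_vector        # cosine similarity (vectors are normalised)
        top = np.argsort(scores)[::-1][:k]
        return [chunks[i] for i in top]

    # The retrieved chunks are then pasted into the prompt as context for the LLM.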

    Summarisation, Data Extraction & Sentiment Analysis

    • Reduce document size, reformat or translate content
    • Extract meaning, sentiment, and key data from raw text
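
    A minimal sketch of the sentiment and data-extraction side, using the same OpenAI-compatible local endpoint assumed in the earlier sketches; the requested fields are illustrative.

    # Minimal sketch: extract sentiment and key facts from raw text in one call.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    prompt = (
        'Return JSON with the keys "sentiment" (positive/neutral/negative), '
        '"people" and "dates" for the text below.\n\n'
        "Text: Brew AI, led by John Davies, returns to Bristol in September 2025."
    )

    reply = client.chat.completions.create(
        model="llama3",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    print(reply.choices[0].message.content)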

    Structured Output with LLMs

    • Techniques for consistently generating structured JSON and other formats from natural language input
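
    One technique covered here, as a minimal sketch: declare the expected shape with pydantic and validate the model's reply against it. The Invoice schema is illustrative, and pydantic v2 is assumed.

    # Minimal sketch: validate an LLM's JSON reply against a declared schema.
    from pydantic import BaseModel, ValidationError

    class Invoice(BaseModel):
        supplier: str
        total: float
        currency: str

    def parse_invoice(raw_reply: str) -> Invoice | None:
        try:
            return Invoice.model_validate_json(raw_reply)   # strict, typed parsing
        except ValidationError as err:
            # In practice, feed the error back to the model and ask it to try again.
            print(f"Model output did not match the schema: {err}")
            return None

    print(parse_invoice('{"supplier": "Acme", "total": 120.5, "currency": "GBP"}'))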

    Quick Break

    Code Generation Techniques

    • How code generation really works
    • Complete, insert (fill-in-the-middle), and instruct modes
    • Exploring specialist coding models
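
    To make the three modes concrete, here is a minimal sketch of how the prompt shapes differ. The fill-in-the-middle sentinel tokens shown are placeholders; each coding model defines its own, so check the model card.

    # Minimal sketch of the three code-generation prompt shapes.
    # The FIM sentinel tokens are placeholders, not any specific model's tokens.

    # 1. Complete: the model continues from a prefix.
    complete_prompt = "def fibonacci(n):\n    "

    # 2. Insert (fill-in-the-middle): the model fills the gap between prefix and suffix.
    fim_prompt = (
        "<FIM_PREFIX>def fibonacci(n):\n"
        "<FIM_SUFFIX>    return a\n"
        "<FIM_MIDDLE>"
    )

    # 3. Instruct: the model follows a natural-language request.
    instruct_prompt = "Write a Python function that returns the n-th Fibonacci number."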

    Tool Calling & Model Context Protocol (MCP)

    • How to call tools with LLMs
    • Building and orchestrating MCPs
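
    A minimal sketch of LLM tool calling using the OpenAI-style tools parameter; an MCP server exposes tools to models in a similar, standardised way. The endpoint, model, and get_weather tool are illustrative, and the endpoint is assumed to support tool calling.

    # Minimal sketch: declare a tool, let the model ask for it, then run it.
    import json
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Return the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    def get_weather(city: str) -> str:
        return f"Sunny in {city}"   # stand-in for a real weather API call

    response = client.chat.completions.create(
        model="llama3",
        messages=[{"role": "user", "content": "What's the weather in Bristol?"}],
        tools=tools,
    )
    call = response.choices[0].message.tool_calls[0]     # the tool call the model chose
    args = json.loads(call.function.arguments)
    print(get_weather(**args))                           # run it and use the result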

    Agentic Systems in Action

    • What are agentic systems?
    • A walkthrough of a basic agentic flow
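
    A minimal sketch of the basic agentic flow walked through here: the model decides on an action, the code executes it, and the result is fed back until the model says it is done. The JSON action protocol, the search_docs stub, and the endpoint are illustrative; real agent frameworks add planning, memory, and guard-rails.

    # Minimal agent loop sketch: ask the model for its next action as JSON,
    # run it, feed the result back, and stop when it answers "finish".
    import json
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    ACTIONS = {"search_docs": lambda q: f"(pretend search results for '{q}')"}

    SYSTEM = (
        'Reply with JSON only: {"action": "search_docs", "input": "..."} to use a tool, '
        'or {"action": "finish", "answer": "..."} when you are done.'
    )

    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": "What time does the workshop start?"}]

    for _ in range(5):  # hard cap on the number of steps
        reply = client.chat.completions.create(model="llama3", messages=messages,
                                               temperature=0.0)
        content = reply.choices[0].message.content
        step = json.loads(content)
        if step["action"] == "finish":
            print(step["answer"])
            break
        result = ACTIONS[step["action"]](step["input"])   # execute the chosen action
        messages.append({"role": "assistant", "content": content})
        messages.append({"role": "user", "content": f"Result: {result}"})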

    Q&A & Wrap-Up

    • Open forum for questions
    • Networking and follow-up discussions
    • Access to workshop materials and code repositories

    Practical Takeaways

    What You'll Walk Away With

    Whether you're building internal tools, experimenting with AI-driven features, or just trying to sharpen your GenAI skills, this workshop is designed to leave you with more than just theory.

    Here's exactly what you'll take home:

    Working Code Samples in Multiple Languages

    Clean, ready-to-run code examples in Python, Java, TypeScript, C#, and C, covering core AI tasks like tokenisation, model calls, JSON output generation, and more.

    Templates for RAG Implementations

    Setups and code patterns for building RAG pipelines using chunked data, embeddings, vector stores, and query handling for document assistants and chatbots.

    Prompt Engineering Patterns

    Best practices for crafting effective prompts, including strategies for structure, temperature tuning, context control, and debugging to get consistent results.

    LLM Performance Optimisation

    Insights and practical adjustments to improve speed, output quality, and cost-efficiency with parameter tuning, token window sizing, batching, and caching.

    Local Model Setup Instructions

    Step-by-step guidance on installing, configuring, and running local LLMs like Llama, Mistral, and Qwen for private, secure applications without vendor lock-in.

    GitHub Repo Access

    Complete access to code, slides, demos, setup scripts, prompt templates, and model integration examples via a dedicated GitHub repo for continued reference.

    Reference Guides

    Hand-picked resources, cheatsheets, and technical docs to help you keep building after the event, including links to model libraries and frameworks.

    Questions & Answers

    Frequently Asked Questions

    Common questions about the Brew AI workshop