
The Real Impact of the AI-DLC

David Anderson | Author: The Value Flywheel Effect

The way we build software is changing faster than most organisations can adapt. What Dave Anderson calls the AI Development Lifecycle (AI-DLC) is at the centre of that change. This isn't about bolting new tools onto existing workflows. It's a fundamental shift in how teams think about modernisation, developer experience, and delivery at scale.

 

Dave Anderson (author of The Value Flywheel Effect) gets specific about what that shift actually means in practice. What does a healthy engineering culture look like when AI is embedded across the stack?

How do AWS services fit into a coherent AI strategy? How do you bring your team through change without losing what makes them effective? And how do you make sure you're growing with the change, not being left behind by it?

 

You'll leave with a clear mental model of where AI genuinely adds value, where the hype outpaces the reality, and concrete steps you can take back to your organisation on Monday morning. 

Level L100


David Anderson

Event-driven architecture, the hard parts

Yan Cui | Independent Consultant: theburningmonk.com

In this talk, let's explore the hard parts of building and operating event-driven architectures in practice, including:

  • End-to-end observability as events hop across services

  • Testing strategies within bounded contexts

  • Evolving event schemas without breaking consumers

  • Catching integration problems early

  • Handling errors and ensuring idempotency

  • Documentation and governance


If you're working with event-driven architectures, come learn how to avoid these common (and painful) pitfalls. 
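The idempotency point above trips up many teams in practice. As a minimal illustration (the event shape and `event_id` field are assumptions, not part of the talk), a deduplicating consumer turns redelivered events into no-ops:

```python
# Minimal idempotent event consumer: each event carries a unique id,
# and we record processed ids so redelivered events become no-ops.
# In production the "seen" store would be DynamoDB or similar with a
# conditional write; an in-memory set stands in for it here.

processed_ids = set()

def handle_event(event: dict, side_effects: list) -> bool:
    """Process an event at most once; return True if work was done."""
    event_id = event["event_id"]
    if event_id in processed_ids:
        return False  # duplicate delivery: skip
    side_effects.append(event["payload"])  # the real business action
    processed_ids.add(event_id)            # mark as done *after* success
    return True
```

The key design point is that the "processed" marker is written only after the side effect succeeds, so a crash mid-handler leads to a retry rather than a lost event.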

Level L300


Yan Cui

Agentic Microservices: The Next Evolution of Microservices Architecture

Matheus Guimaraes | Senior Developer Advocate: AWS

Microservices architecture is evolving. AI agents are reshaping how backend systems are built and consumed. Discover Agentic Microservices, a new evolution of microservices architecture, along with emerging patterns such as Microservices as Tools (MAT) and Agentic Monoliths. We'll also explore why serverless is a natural fit for agentic architectures and see an agentic microservice in action.
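The talk defines Microservices as Tools (MAT) itself; as a rough sketch of the general idea, exposing a service operation to an agent amounts to a tool spec plus a dispatcher. All names below are illustrative assumptions, not the speaker's API:

```python
# Sketch: exposing a microservice operation as an agent "tool".
# The tool spec mirrors the JSON-schema style most LLM tool interfaces
# use; the dispatcher maps a model's tool call onto the service.

def get_order_status(order_id: str) -> dict:
    # Stand-in for an HTTP call to the orders microservice.
    return {"order_id": order_id, "status": "shipped"}

TOOLS = {
    "get_order_status": {
        "description": "Look up the status of an order",
        "parameters": {"order_id": {"type": "string"}},
        "handler": get_order_status,
    }
}

def dispatch(tool_call: dict) -> dict:
    """Route an agent's tool call to the backing microservice."""
    tool = TOOLS[tool_call["name"]]
    return tool["handler"](**tool_call["arguments"])
```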

Level L300


Matheus Guimaraes

Paving the Way: Following Golden Paths on our journey from Monolith to Modern Architecture

Andrew Worden-Fitzpatrick | Staff Software Engineer: Perk
James Butherway | Staff DevOps Engineer: Perk

At Perk, we move fast. We are a Cloud- and AI-native company with 380+ engineers spread across 6 global hubs and 6 time zones; autonomy is in our DNA. But with total squad autonomy comes a hidden tax: architectural fragmentation. How do you maintain "contextual consistency" in your software when dozens of independent teams are all building at once?

Our journey didn't start with a "grand rewrite." We began with a robust Django monolith that powered our early success. In this talk, we share our pragmatic approach to evolution: we aren't killing the monolith for the sake of it - we are "strangling" the parts that slow us down while keeping the parts that still provide value.

To bridge the gap between autonomy and consistency, we developed our Golden Paths. We’ll deep-dive into the theory of paved roads and demonstrate tk-create - our internal engine designed to serve production-ready AWS patterns. We’ll show how we use these paths to bake in security, observability, and AWS best practices by default, allowing a squad in Birmingham to build with the same "Perk DNA" as a squad in Barcelona, Berlin or Boston.

Key Takeaways:

  • The Autonomy Paradox: Balancing developer freedom with the need for global architectural consistency.

  • Pragmatic Migration: An example of services being extracted from the monolith and what we aim to leave behind.

  • Standardizing the "New World": How we define Golden Paths to provision DynamoDB, S3, SNS, SQS, API Gateway, etc. for our Lambda and ECS services.

  • Dealing with exceptions: How we quickly adopt AWS services to stop our patterns becoming a “Golden Cage”

  • tk-create & Tooling: Scaling infrastructure patterns across 380+ engineers without becoming a bottleneck and, looking to the future, how AI can help us evolve our practices even further.
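tk-create is Perk-internal, so its actual interface isn't shown here. As a toy sketch of the general shape of a golden-path scaffolder, the point is that paved-road defaults (encryption, tracing, tagging) are baked into every pattern rather than opted into per squad; every name and default below is an assumption:

```python
# Toy sketch of a "golden path" scaffolder. Project patterns carry the
# paved-road defaults so every squad gets them without opting in.
# tk-create is Perk-internal; this only illustrates the concept.

GOLDEN_DEFAULTS = {
    "encryption_at_rest": True,
    "tracing": "active",
    "tags": {"owner-required": True},
}

PATTERNS = {
    "lambda-api": {"runtime": "python3.12", "resources": ["APIGateway", "Lambda"]},
    "queue-worker": {"runtime": "python3.12", "resources": ["SQS", "Lambda"]},
}

def create_service(name: str, pattern: str) -> dict:
    """Render a service definition from a named pattern plus golden defaults."""
    if pattern not in PATTERNS:
        raise ValueError(f"no golden path named {pattern!r}")
    return {"name": name, **PATTERNS[pattern], **GOLDEN_DEFAULTS}
```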

Level L300


Andrew Worden-Fitzpatrick


James Butherway

AWS Bedrock for serverless Intelligent Document Processing at DVSA

Shaun Hare | Principal Developer: DVLA

This talk will give a practical demonstration of the work we are doing to build an intelligent document processing pipeline using Amazon Textract, Amazon Bedrock Data Automation, and Amazon Bedrock for LLM invocation. It will cover why we are doing this, the architecture we developed, and the challenges and results.
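The talk's exact pipeline isn't reproduced here; as one concrete fragment, handing Textract output to an LLM via Bedrock's Converse-style API is ultimately message assembly. The model id and field names below are examples, not the DVSA implementation:

```python
# Sketch: assemble a Bedrock Converse-style request asking an LLM to
# pull structured fields out of text Textract has already extracted.
# In practice the request is sent with boto3's bedrock-runtime
# `converse` call; here we only build (and can therefore test) the payload.

def build_extraction_request(document_text: str, fields: list[str]) -> dict:
    prompt = (
        "Extract the following fields as JSON: "
        + ", ".join(fields)
        + "\n\nDocument:\n"
        + document_text
    )
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # example model id
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.0},
    }
```

Keeping payload construction pure like this makes the prompt template easy to unit-test without touching AWS.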

Level L200


Shaun Hare

Building a real-time voice agent that feels conversational, not transactional

Matthew Wilson | Distinguished Software Engineer: Instil

Niall Keys | Software Engineer: Instil

In 2025 we were tasked with building a real-time voice assistant for on-the-ground sales reps working in global pharma. They were struggling to keep accurate notes after meetings, capture outcomes from conversations, and update their CRM, often doing this admin work in their own personal time.

In this talk we share the story of building a production-grade voice agent on AWS using Amazon Nova Sonic 2.

This application had a measurable real-world impact on the work–life balance of the sales reps. However, building something that feels conversational is very different from stitching together AI services.

We’ll cover the hard lessons learnt from building real-time voice applications:
- Keeping an agent on task over multi-turn conversations.
- Managing context windows.
- Handling turn detection in noisy, real-world environments.
- Architecting for conversations longer than Lambda's 15-minute timeout.
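Turn detection in particular is deceptively hard. As a deliberately naive illustration of the basic mechanism (production systems, including the speech-to-speech models themselves, are far more robust in noise), treat a sustained run of low-energy frames as end-of-turn; the thresholds are made-up numbers:

```python
# Naive energy-based turn detection: a trailing run of low-energy audio
# frames is treated as end-of-turn. Real noisy-environment systems need
# much more than this; the sketch only shows the core idea.

def detect_end_of_turn(frame_energies: list[float],
                       silence_threshold: float = 0.1,
                       min_silence_frames: int = 5) -> bool:
    """Return True if the trailing frames look like sustained silence."""
    if len(frame_energies) < min_silence_frames:
        return False
    tail = frame_energies[-min_silence_frames:]
    return all(e < silence_threshold for e in tail)
```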

We’ll also explore why evals and observability with CloudWatch become mission-critical when prompt tweaks and model swaps can subtly degrade behaviour in ways traditional monitoring won’t catch.

This session is for engineers building AI systems that need to feel conversational, not transactional, and who are discovering that "serverless" sometimes means knowing when not to use Lambda and to reach for ECS Fargate instead.

Level L300


Matthew Wilson


Niall Keys

Context Over Code: Why AI-DLC Is an Organisational Transformation, Not a Productivity Hack

Matt Houghton | AI, Data and Analytics Architect: CDL Software

AI coding assistants are everywhere, but faster code generation isn't the problem most teams need to solve. At CDL, we rolled out Amazon Q Developer to 150+ engineers and saw real results — but we quickly realised the bigger opportunity wasn't in the tools, it was in rethinking how we work.

This talk covers our journey from AI code companion to AI-Driven Development Lifecycle (AI-DLC): how we trialled and scaled AI tooling, why we built a Context Store to give AI access to decades of institutional knowledge, and what it actually takes to make AI-native development work in practice. We'll cover steering, semantic retrieval with Bedrock Knowledge Bases and MCP, verification debt, and why the developers who thrive aren't the fastest coders — they're the ones who curate context and own quality.
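At its core, the semantic retrieval behind a Context Store reduces to nearest-neighbour search over embeddings; Bedrock Knowledge Bases do this at scale with a managed vector store, but the maths is the same. A dependency-free sketch:

```python
import math

# Core of semantic retrieval: rank stored snippets by cosine similarity
# between their embedding and the query's embedding. A managed service
# (e.g. a Bedrock Knowledge Base) does this at scale; the principle is
# identical.

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, store, k=2):
    """store: list of (snippet, embedding) pairs; returns the best-k snippets."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [snippet for snippet, _ in ranked[:k]]
```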

Level L200


Matt Houghton

Next-Generation Resilience Testing and Disaster Recovery with AI

Sudha Arumugam | Solutions Architect: AWS

Traditional disaster recovery strategies often fall short when addressing the intricate, evolving characteristics of contemporary cloud infrastructures, creating vulnerabilities in system resilience and regulatory compliance. Explore how AI-driven capabilities can strengthen resilience and disaster recovery on AWS. This methodology connects infrastructure intelligence with application-level validation, facilitating more robust disaster recovery readiness. You'll discover how to harness Large Language Models (LLMs) alongside AWS Resilience Hub and AWS Systems Manager to transform testing approaches, evaluate infrastructure configurations, and produce customized AWS Fault Injection Service experiments and recovery procedures. Gain hands-on insights into automated test creation using templates and master the art of prompt engineering.
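"Producing a customised FIS experiment" is ultimately producing a JSON document, which is what makes it a good target for generation. As a hand-written sketch of the shape such output takes (`aws:ec2:stop-instances` is a real FIS action; the role ARN and tags are placeholders):

```python
# Sketch: programmatically build an AWS FIS experiment template request.
# aws:ec2:stop-instances is a real FIS action id; the role ARN and tag
# values are placeholders an LLM-driven pipeline would fill in from the
# architecture it has analysed.

def stop_instances_template(role_arn: str, tag_key: str, tag_value: str) -> dict:
    return {
        "description": "Stop tagged instances to test recovery",
        "roleArn": role_arn,
        "stopConditions": [{"source": "none"}],
        "targets": {
            "tagged-instances": {
                "resourceType": "aws:ec2:instance",
                "resourceTags": {tag_key: tag_value},
                "selectionMode": "ALL",
            }
        },
        "actions": {
            "stop": {
                "actionId": "aws:ec2:stop-instances",
                "targets": {"Instances": "tagged-instances"},
            }
        },
    }
```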

Level L300


Sudha Arumugam

Strategic Intelligence on AWS: From Data to Decisions (Live AI Demo)

Satyen Fakey | Co-Founder & CTO: Unloq®

Mid-market and Enterprise companies aren’t short on GenAI demos. They’re short on trusted systems that turn insight into execution, stay within policy, and prove ROI. In this session, I’ll share how we’re building Strategic Intelligence on AWS: a governed decision control plane that converts KPIs, objectives, internal docs, and signals into approved actions, then records a measurable “impact receipt” each time.

We’ll cover a practical blueprint that balances tech and commercial outcomes:
• Grounded context with provenance, so outputs can be trusted
• Decision workflows using FastAPI, with AWS Lambda for event-driven steps and automation
• RDS Postgres + pgvector retrieval for traceability
• Policy-safe model and tool routing, including Bedrock where it fits
• Human-in-command approvals that create an audit trail
• An Impact Ledger, tracking expected vs realised outcomes over time

Live demo
A short end-to-end demo showing one workflow in action: decision, approval, and an Impact Ledger entry, plus what you measure after deployment.

Audience takeaway
A reusable approach for shipping strategy-grade AI on AWS that leadership can trust: faster decisions, safer execution, and measurable impact.
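The "impact receipt" idea can be made concrete with a simple record: capture the expected outcome at decision time, the realised outcome later, and the variance between them. The names below are assumptions for illustration, not Unloq's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative shape of an Impact Ledger entry: expected outcome recorded
# at decision time, realised outcome recorded later, variance derived.
# Field names are assumptions, not the product's real schema.

@dataclass
class ImpactEntry:
    decision: str
    expected: float
    realised: Optional[float] = None

    def variance(self) -> Optional[float]:
        """Realised minus expected, or None until the outcome lands."""
        if self.realised is None:
            return None
        return self.realised - self.expected
```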

Level L200


Satyen Fakey

Building Secure and Efficient SaaS Platforms on AWS Serverless

Guilherme Dalla Rosa | CTO: MerCloud

Let's go on a journey through the world of multi-tenant architectures on AWS using serverless technologies. In this talk, we will uncover the key aspects of multi-tenancy, including security, tenant isolation, and performance. We will learn how to utilise Cognito for authentication, DynamoDB to store millions of tenant-partitioned records, and Lambda for compute. We will also explore different deployment models and their tradeoffs, and, finally, we will learn how to implement policy-based isolation with IAM to keep our execution context tied to one specific tenant and avoid data leakage. By the end of this talk, you will feel more confident building SaaS applications on AWS with serverless technologies, and you will have learned some of the many insights that come from the AWS Well-Architected SaaS Lens.
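The policy-based isolation mentioned here is commonly implemented with a scoped session policy using DynamoDB's real `dynamodb:LeadingKeys` condition key, attached when assuming the execution role. A sketch of generating such a policy (the table ARN and tenant id are placeholders):

```python
import json

# Sketch of policy-based tenant isolation: a session policy restricting
# DynamoDB access to items whose partition key equals the caller's tenant
# id, via the dynamodb:LeadingKeys condition key. In practice this is
# attached when assuming the execution role (e.g. STS AssumeRole).

def tenant_session_policy(table_arn: str, tenant_id: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
            "Resource": table_arn,
            "Condition": {
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": [tenant_id]
                }
            },
        }],
    }
    return json.dumps(policy)
```

Because the condition is enforced by IAM rather than application code, a bug in a query can no longer leak another tenant's items.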

Level L300


Guilherme Dalla Rosa

AI Agents Demystified - Fundamental Concepts for Developers

Luc van Donkersgoed | Principal Engineer: PostNL

All software development seems to be about AI agents these days. If we're to believe the hype, they will solve everything from banking to groceries. There is something mysterious in all this buzz. How do AI agents work? Are they just an LLM autonomously deciding how to solve problems and how to interact with the world? In this talk we will pull back the curtain and see what makes AI agents tick – and we'll see it all looks a lot like classic software engineering, including state management, authentication, databases, JSON parsing, observability and infinite loops. By the end of the talk you will know that developing an AI agent is not that different from any other application – and you'll have all the building blocks to start creating agents yourself, today!
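The "classic software engineering" framing can be made concrete in a few lines: an agent is essentially a loop around a model that either returns an answer or requests a tool call, with message history as state and a guard against infinite loops. A toy version with a scripted stand-in for the LLM (all names are illustrative):

```python
# Toy agent loop: the "model" either asks for a tool or answers.
# The talk's list is visible in miniature: state management (the
# history), tool dispatch, and an infinite-loop guard.

def run_agent(model, tools: dict, question: str, max_turns: int = 5) -> str:
    history = [("user", question)]        # state management
    for _ in range(max_turns):            # infinite-loop guard
        step = model(history)
        if step["type"] == "answer":
            return step["text"]
        result = tools[step["tool"]](**step["args"])   # tool dispatch
        history.append(("tool", result))
    raise RuntimeError("agent exceeded max_turns")

def scripted_model(history):
    """Stand-in for an LLM: first request a lookup, then answer with it."""
    if not any(role == "tool" for role, _ in history):
        return {"type": "tool_call", "tool": "lookup", "args": {"key": "capital"}}
    return {"type": "answer", "text": history[-1][1]}
```

Swap the scripted model for a real LLM call and the control flow stays exactly the same, which is the talk's point.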

Level L300


Luc van Donkersgoed

Building and Scaling Inference workloads on Amazon EKS (Workshop)

Robert Northard | Principal Specialist Solutions Architect, Containers & Accelerated Compute: AWS

Join us for an immersive hands-on workshop exploring how to build and scale production-ready inference deployments on Amazon EKS using NVIDIA GPUs. As organizations move beyond experimentation to production deployment of GenAI applications, Kubernetes has emerged as a preferred platform for managing inference workloads at scale, offering robust orchestration, cost optimization, and enterprise-grade reliability.

Whether you're looking to deploy your first language model or scale existing inference workloads, this workshop will provide you with best practices and hands-on experience using industry-leading tools and frameworks. Learn directly from AWS experts who have helped organizations successfully deploy and manage large-scale GenAI infrastructure.

Through hands-on labs and real-world examples, you'll learn to master:

  • EKS cluster setup optimised for NVIDIA GPU workloads

  • Efficient model serving and scaling using vLLM

  • Distributed inference architecture implementation with Ray

  • Comprehensive monitoring and observability using Prometheus and Grafana

  • Best practices for production GenAI deployments on Kubernetes

  • Agentic AI on Amazon EKS

Who should attend: DevOps/MLOps engineers, AI/ML developers, Solution Architects, Product owners (or similar functions/titles).
 

Please do not forget to bring your laptops as this is a hands-on workshop.


Robert Northard
