
Evren Gursoy

Jerome Sadler

Title: Building and Running a Customised GenAI Solution on AWS

Description: 

Join us to explore the transformative potential of large language models (LLMs) in software engineering. This session gives engineers and tech leaders the opportunity to examine LLMs' practical applications, strengths, and limitations through a series of engaging topics and real-world examples. We will use RiverSafe’s NeuroLogik to guide our demonstrations and provide concrete illustrations of the concepts discussed.

Key Topics:

1. Real-Life Examples of LLM Strengths and Limitations:

  • Dive into Repositories: Learn how to leverage LLMs to understand the functionality of undocumented codebases.

  • Generate Multi-Format Answers: See how LLMs can create responses incorporating diagrams and tables for clearer communication.

  • Code Explanation in Plain English: Discover how LLMs can turn complex code into clear, plain-English explanations.

  • Challenges and Contexts: Understand the challenges of chat context and the limited context window LLMs work with.

  • Effective and Ineffective Prompts: Identify the prompts that yield the best results and avoid those that don't.

  • Distinguishing Good Answers from Illusions: Learn strategies to safeguard your organisation against seemingly accurate but incorrect answers.

2. Opening Organisational Eyes to LLM Limitations:

  • Equip your team with the knowledge to recognise and mitigate the limitations of LLMs.

3. Real-Time Interface vs. Offline Processing:

  • Quick Task Handling: Explore the effectiveness of LLMs for addressing narrow, small-scope tasks in real-time.

  • Design Document Generation: See how offline LLM solutions can support comprehensive design document creation.

4. Identifying Improvements and Correcting Degradations in LLM Response Quality:

  • Delve into the challenges of assessing LLM performance, especially when dealing with free-text output where multiple correct answers may exist.

5. Comparing LLMs to Human Expertise:

  • Making Mistakes: Recognise that, like humans, LLMs make errors, and understand the frequency and context of these mistakes.

  • Efficiency in Analysis: Compare LLM solutions, which analyse codebases quickly but with occasional hallucinations, against human experts, who may take longer and whose analysis is more subjective.

  • Variability in Responses: Understand the probabilistic nature of LLMs, which leads to varied responses, akin to how different experts provide different opinions based on their experiences and priorities.

Why attend?

  1. Gain practical insights into implementing LLM solutions in enterprise software engineering

  2. Learn to identify and mitigate LLM limitations, protecting your organisation from potential pitfalls

  3. Enhance your ability to assess and implement emerging AI technologies in your organisation

Author Bio

Jerome Sadler is co-founder and Director of the AI-focussed startup FutureNow Solutions Ltd, and an AI Partner for RiverSafe, working with RiverSafe at a FTSE 100 company on the first in-house, AI-powered application approved there for production use. Since completing his studies at Cambridge University, Jerome’s career has encompassed development and solutions architecture in Java and .NET in the insurance (Xchanging) and capital markets (Coexis / GBST) industries, programme management in the telecommunications industry (Colt), and a variety of technology innovation consultancy roles. He sits on the Technology Advisory Board for the medical technology organisation CorporateHealth International. Latterly, he has focussed on the practical ways in which AI, in combination with more orthodox engineering approaches, can deliver meaningful outcomes to organisations.
