Large Language Models Explained (DDN1-E25)

Product code: DDN1-E25

Available Session

To register, you will be prompted to sign in.

May 25, 2026

Virtual

1:00 pm to 2:30 pm (ET)

Bilingual, with interpretation in both official languages


Closed captioning is provided for all events. Accommodation needs can be specified in a separate form after registration. For technical support or help registering for this event, please email:
learningevents-evenementsdapprentissage@csps-efpc.gc.ca

Overview

Delivery method

Online

Duration

1.5 hours

Audience

Public servants at all levels

Description

Large language models (LLMs) underpin many of today’s generative AI tools. At the same time, scrutiny around their risks, limitations, and trade-offs is increasing. A practical understanding of how LLMs work can help inform decisions about where and when generative AI should be used, particularly when it is embedded in organizational systems.

This event will explain how large language models are built, adapted, and used responsibly in practice, and will highlight leading Canadian research on model training, fine-tuning, guardrails, and related techniques. The event will also explain how LLMs generate outputs, how they differ from small language models, how they can support a range of work activities, and what considerations shape their effective and appropriate use.

Participants will deepen their understanding of LLM fundamentals, including inherent limitations, biases and risks; what safe and responsible use of LLM-powered tools looks like; key use cases; and how AI agents leverage LLMs to perform tasks.

This event is organized in partnership with National Research Council Canada.

Learn more about the Learning Week on Artificial Intelligence.

Speakers

    Isar Nejadgholi

    Senior Research Officer, Digital Technologies, National Research Council of Canada

    Dr. Isar Nejadgholi is a Senior Research Officer at the National Research Council of Canada and an Adjunct Professor at the University of Ottawa. Her experience spans both industry and research, focusing on evaluating and mitigating risks in AI systems, with an emphasis on safety, multilingual evaluation, and human-centred approaches. At NRC, she has contributed to several collaborations with the Canadian AI Safety Institute, including international safety evaluation efforts. Dr. Nejadgholi has published more than 70 peer-reviewed papers in top-tier journals and conferences, and her work aims to bridge technical innovation with societal impact in high-stakes and cross-cultural contexts.

    Dan Lowcay

    Program Technical Lead, Artificial Intelligence and Modernisation Services, Chief Information Officer Branch, National Research Council of Canada

Dan Lowcay leads the National Research Council of Canada's Artificial Intelligence and Modernisation Services (AIMS) team, which is responsible for developing and deploying custom generative AI solutions for NRC staff. Dan's team has developed AI Zone (a Protected B-approved generative AI platform) and more than 30 use-case-specific tools based on proposals from across the NRC. Dan's team also evaluates and configures enterprise AI solutions, provides training for NRC staff, and maintains guidelines for the responsible use of generative AI tools at work.

    Louis Borgeat

    Director, Research & Development and acting Principal Advisor for AI Safety, Digital Technologies, National Research Council of Canada

Louis Borgeat has been Director of Research and Development at the Digital Technologies Research Center (DTRC) at the National Research Council of Canada since 2019. He is also currently acting Principal Advisor for AI Safety at the DTRC. In these roles, he supports research activities in natural language processing, bioinformatics, computer vision, AI safety, and responsible AI. He previously led the Computer Vision and Graphics team for 12 years, having joined the NRC as a researcher in 2001. His own research background includes data analytics for 2D/3D imaging systems and computer vision and graphics, with a specific interest in large-scale analytics in health, defence, and industrial applications.

    Moderator

    Chantal Desmarais Barton

    Director, Special Initiatives & Executive Advisor to the Vice President, Digital Technologies, National Research Council of Canada

Chantal Desmarais Barton is an experienced and innovative leader with over 25 years of driving advancements in the social, educational, scientific, technological, and healthcare sectors. She serves as Executive Advisor to the Vice President of Digital Technologies, as well as Director of Special Initiatives. She oversees large-scale strategic initiatives in a wide range of areas, including partnerships, major capital investments, strategic planning, responsible AI, and AI governance. She started her career in the telecommunications industry conducting user needs assessment research. Prior to joining the NRC in May 2019, she held a variety of progressively challenging roles in the private, not-for-profit, and public sectors. She holds a Ph.D. in Experimental Psychology (Cognitive Psychology and Language) from the University of Ottawa and is an accredited Project Management Professional (PMP).

Date modified: 2025-07-22

Contact us