Introduction to LLMs

Lecture 21

Dr. Benjamin Soltoff

Cornell University
INFO 5001 - Fall 2025

November 11, 2025

Announcements

Announcements

TODO

Learning objectives

  • Define a conversation with a large language model (LLM)
  • Identify the key roles in an LLM conversation
  • Use {ellmer} to converse with an LLM
  • Define a generative pre-trained transformer
  • Create basic chat apps using Shiny

Application exercise

ae-19

Instructions

  • Go to the course GitHub org and find your ae-19 repo (the repo name is suffixed with your GitHub username).
  • Clone the repo in Positron, run renv::restore() to install the required packages, then open the Quarto document in the repo and complete the exercises.
  • Render, commit, and push your edits by the AE deadline (end of the day).

⌨️ 01_hello-llm

Instructions

  1. 🔓 Decrypt the .Renviron.secret → .Renviron
    1. Run secret.R
    2. The special phrase is:
      info-5001
  2. 🤖 Run the code in 01_hello-llm.R

How to think about LLMs

Think empirically, not theoretically

  • It’s okay to treat LLMs as black boxes. We’re not going to focus on how they work internally

  • Just try it! When wondering if an LLM can do something, experiment rather than theorize

  • You might think they could not possibly do things that they clearly can do today

Embrace the experimental process

  • Don’t worry about ROI during exploration. Focus on learning and engaging with the technology

  • Failure is valuable! Failed attempts often make for the most interesting conversations

  • It doesn’t have to be a success. Attempts that don’t work still provide insights

Start simple, build understanding

  • We’re going to focus on the core building blocks.

  • All the incredible things you see AI do decompose to just a few key ingredients.

  • Our goal is to build intuition through hands-on experience.

Anatomy of a conversation

What’s an HTTP request?

Talking with ChatGPT happens via HTTP
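
Under the hood, each chat turn is an HTTP POST to the provider’s API. Here is a minimal sketch with {httr2}, assuming an OPENAI_API_KEY environment variable is set; {ellmer} builds and sends a request along these lines for you:

library(httr2)

# POST a conversation to OpenAI's chat completions endpoint
resp <- request("https://api.openai.com/v1/chat/completions") |>
  req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) |>
  req_body_json(list(
    model = "gpt-4.1",
    messages = list(
      list(role = "user", content = "Tell me a joke about R.")
    )
  )) |>
  req_perform()

# The assistant's reply comes back as JSON
resp_body_json(resp)$choices[[1]]$message$content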

Messages have roles

Message roles

Role            Description
system_prompt   Instructions from the developer (that’s you!) to set the behavior of the assistant
user            Messages from the person interacting with the assistant
assistant       The AI model’s responses to the user
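
In the request body, a conversation is just a list of messages, each with a role and some content. A minimal sketch as an R list, using the field names of the OpenAI-style chat API ({ellmer} assembles this for you):

messages <- list(
  list(role = "system",    content = "You are a dad joke machine."),
  list(role = "user",      content = "Tell me a joke about R."),
  list(role = "assistant", content = "Why did the R programmer go broke? Because he kept using sample() and lost all his data!")
)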

Hello, {ellmer}!

Hello, {ellmer}!

library(ellmer)

Hello, {ellmer}!

library(ellmer)

chat <- chat_openai()

Hello, {ellmer}!

library(ellmer)

chat <- chat_openai()

chat$chat("Tell me a joke about R.")

Hello, {ellmer}!

library(ellmer)

chat <- chat_openai()

chat$chat("Tell me a joke about R.")
#> Why did the R programmer go broke?
#> Because he kept using `sample()` and lost all his data!

❓ What are the user and assistant roles in this example?

Hello, {ellmer}!

chat
<Chat OpenAI/gpt-4.1 turns=2 tokens=14/29 $0.00>
── user [14] ───────────────────────────────────────
Tell me a joke about R.
── assistant [29] ──────────────────────────────────
Why did the R programmer go broke?

Because he kept using `sample()` and lost all his data!

Hello, {ellmer}!

library(ellmer)

chat <- chat_openai(
  system_prompt = "You are a dad joke machine."
)

chat$chat("Tell me a joke about R.")

Hello, {ellmer}!

library(ellmer)

chat <- chat_openai(
  system_prompt = "You are a dad joke machine."
)

chat$chat("Tell me a joke about R.")
#> Why did the letter R get invited to all the pirate parties?
#> 
#> Because it always knows how to *arr-r-ive* in style!

Hello, {ellmer}!

chat
<Chat OpenAI/gpt-4.1 turns=3 tokens=25/28 $0.00>
── system [0] ──────────────────────────────────────
You are a dad joke machine.
── user [25] ───────────────────────────────────────
Tell me a joke about R.
── assistant [28] ──────────────────────────────────
Why did the letter R get invited to all the pirate parties?

Because it always knows how to *arr-r-ive* in style!

⌨️ 02_word-game

Instructions

  1. Set up a chat with a system prompt:

    You are playing a word guessing game. At each turn, guess the word and tell us what it is.

  2. Ask: In British English, guess the word for the person who lives next door.

  3. Ask: What helps a car move smoothly down the road?

  4. Create a new, empty chat and ask the second question again.

  5. How do the answers to 3 and 4 differ? Why?

Demo: clearbot

👨‍💻 _demos/03_clearbot

System prompt:

You are playing a word guessing game. At each turn, guess the word and tell us what it is.

First question:

In British English, guess the word for the person who lives next door.

Second question:

What helps a car move smoothly down the road?
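
A minimal {ellmer} sketch of the demo (the script in _demos/03_clearbot may differ): the same chat object carries the first guess as context, while a brand-new chat does not.

library(ellmer)

game_prompt <- "You are playing a word guessing game. At each turn, guess the word and tell us what it is."

# One continuing conversation: the second question is answered with the first guess still in context
clearbot <- chat_openai(system_prompt = game_prompt)
clearbot$chat("In British English, guess the word for the person who lives next door.")
clearbot$chat("What helps a car move smoothly down the road?")

# A fresh chat has no memory of the first question
fresh <- chat_openai(system_prompt = game_prompt)
fresh$chat("What helps a car move smoothly down the road?")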

How to talk to robots

When you talk to ChatGPT

  1. You write some words

  2. ChatGPT continues writing words

  3. You think you’re having a conversation

ChatGPT

Chatting with a Generative Pre-trained Transformer

LLM → Large Language Model

How to make an LLM

If you read everything
ever written…

  • Books and stories

  • Websites and articles

  • Poems and jokes

  • Questions and answers


…then you could…

  • Answer questions
  • Write stories
  • Tell jokes
  • Explain things
  • Translate into any language

The cat sat in the ____

  • 🎩
  • 🛌
  • 📦
  • 🪟
  • 🛒
  • 👠
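
One way to picture this: the model assigns a probability to every plausible continuation and samples from that distribution. A toy sketch with made-up numbers, purely for illustration:

# Illustrative (made-up) probabilities for completing "The cat sat in the ____"
next_word <- c(hat = 0.35, box = 0.20, window = 0.15, bed = 0.12, cart = 0.10, shoe = 0.08)

# Sample one continuation in proportion to its probability
sample(names(next_word), size = 1, prob = next_word)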

Actually: tokens, not words

  • Fundamental units of information for LLMs
  • Words, parts of words, or individual characters
    • “hello” → 1 token
    • “unconventional” → 3 tokens: un|con|ventional
  • Important for:
    • Model input/output limits
    • API pricing (usually charged per token)
  • Not just text: images can be tokenized too
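
Exact counts depend on the model’s tokenizer, but a common rule of thumb for English text is roughly four characters per token. A rough sketch of that estimate (an approximation, not a real tokenizer):

prompt <- "Tell me a joke about R."

# Rule-of-thumb estimate: ~4 characters per token for English text
ceiling(nchar(prompt) / 4)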

Demo: token-possibilities

👨‍💻 _demos/04_token-possibilities

Programming is fun, but I kind of like ChatGPT…

{ellmer} can do that, too!

           Console               Browser
{ellmer}   live_console(chat)    live_browser(chat)
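
A quick sketch: create a chat client as before, then hand it to either helper.

library(ellmer)

chat <- chat_openai(system_prompt = "You are a dad joke machine.")

# Chat interactively at the R console...
live_console(chat)

# ...or in a browser-based chat UI
live_browser(chat)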

⌨️ 05_live

Instructions

  1. Your job: write a groan-worthy roast of students at Cornell University

  2. Bonus points for puns, rhymes, and one-liners

  3. Don’t be mean

04:00

shinychat

{shinychat} in R

Start with the shinyapp snippet

library(shiny)
library(bslib)

ui <- page_fillable(

)

server <- function(input, output, session) {

}

shinyApp(ui, server)

{shinychat} in R

Load {shinychat} and {ellmer}

library(shiny)
library(bslib)
library(shinychat)
library(ellmer)

ui <- page_fillable(

)

server <- function(input, output, session) {

}

shinyApp(ui, server)

{shinychat} in R

Use the shinychat chat module

library(shiny)
library(bslib)
library(shinychat)
library(ellmer)

ui <- page_fillable(
  chat_mod_ui("chat")
)

server <- function(input, output, session) {
  chat_mod_server("chat")
}

shinyApp(ui, server)

{shinychat} in R

Create and hook up a chat client to use in the app

library(shiny)
library(bslib)
library(shinychat)
library(ellmer)


ui <- page_fillable(
  chat_mod_ui("chat")
)

server <- function(input, output, session) {
  client <- chat_openai()
  chat_mod_server("chat", client)
}

shinyApp(ui, server)

⌨️ 06_word-games

Instructions

  1. I’ve set up the basic Shiny app snippet and a system prompt.

  2. Your job: create a chatbot that plays the word guessing game with you.

  3. The twist: this time, you’re guessing the word.

07:00

Interpolation in R

library(ellmer)

words <- c("elephant", "bicycle", "sandwich")

interpolate(
  "The secret word is {{ sample(words, 1) }}."
)
[1] │ The secret word is bicycle.

Interpolation in R

library(ellmer)

words <- c("elephant", "bicycle", "sandwich")

interpolate(
  "The secret word is {{ words }}."
)
[1] │ The secret word is elephant.
[2] │ The secret word is bicycle.
[3] │ The secret word is sandwich.
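
Interpolation is handy for building system prompts, e.g. hiding a randomly chosen secret word from the player in 06_word-games. A minimal sketch (the prompt wording here is an assumption, not the exact one in the exercise):

library(ellmer)

words <- c("elephant", "bicycle", "sandwich")

# Interpolate a randomly chosen secret word into the system prompt
secret_prompt <- interpolate(
  "You are running a word guessing game. The secret word is {{ sample(words, 1) }}. Give the player hints, but never reveal the word until they guess it."
)

chat <- chat_openai(system_prompt = secret_prompt)
chat$chat("Give me my first hint!")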

Wrap-up

Recap

  • LLM conversations happen via HTTP requests with messages that have roles
  • {ellmer} makes it easy to converse with LLMs in R
  • Generative pre-trained transformers use the Transformer architecture and its attention mechanism to generate text one token at a time
  • You can build chat apps in Shiny with {shinychat} and {ellmer}

Acknowledgments