Does AI Understand?

Before Class

You should have read both articles before today's discussion (~50 minutes of reading).

Please complete the preparation conversation below before class. This is part of attendance for today's meeting.

Preparation Discussion


Today's Plan

Today you'll discuss the debate between Chiang and Somers in three rounds, each with a different partner and a different angle on the question. After each round, we'll hear from a few pairs before moving on.


In-Class Activity (~85 min)

1. Round 1: The Blurry JPEG (partner work, ~12 min)
2. Round 1: Report Out (~3 min)
3. Round 1: Share Out (~10 min)
4. Round 2: The Case for Understanding (partner work, ~12 min)
5. Round 2: Report Out (~3 min)
6. Round 2: Share Out (~10 min)
7. Round 3: The Writing Test (partner work, ~12 min)
8. Round 3: Report Out (~3 min)
9. Round 3: Share Out (~10 min)
10. Wrap-Up (~5 min)
11. Feedback (~5 min)


Round 1: The Blurry JPEG

Partner Activity

This activity involves working with a partner.

Chiang argues that an LLM is a lossy compression of the Web, like a JPEG that resembles the original image but has lost information. The Xerox photocopier story makes this vivid: a copier silently substituted similar-looking digits on scanned documents, and nobody noticed for years. When ChatGPT "hallucinates," Chiang says, that is a compression artifact: the system is interpolating between things it has seen, and sometimes the interpolation is wrong.
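The Xerox failure mode is easy to reproduce with any lossy scheme that stores one representative for a cluster of similar inputs. Here is a toy Python sketch (an invented bucket quantizer, not the actual JBIG2 algorithm the copiers used):

```python
# Toy lossy "codec": remember only which bucket a value falls in,
# then reconstruct every value in a bucket as the bucket's midpoint.
# This mirrors the Xerox bug, where similar-looking digits were
# silently replaced by a single stored patch.

def compress(values, bucket_size=10):
    # Lossy step: many distinct inputs map to the same bucket index.
    return [v // bucket_size for v in values]

def decompress(buckets, bucket_size=10):
    # Reconstruction: substitute each bucket's plausible representative.
    return [b * bucket_size + bucket_size // 2 for b in buckets]

original = [62, 68, 71, 140]  # e.g. room areas printed on a floor plan
restored = decompress(compress(original))
print(restored)  # [65, 65, 75, 145]: clean-looking output, wrong numbers
```

Notice that 62 and 68 both come back as a confident-looking 65, and nothing in the output signals that information was lost. That is exactly why the copier bug went unnoticed for years, and why Chiang thinks hallucinations are so easy to mistake for knowledge.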

Discuss with your partner: Is this a fair analogy? When you use ChatGPT or Claude and it produces something that sounds right, is it "understanding" your question or "decompressing" patterns from its training data? Does the distinction matter if the output is useful?

You can reference both articles during the discussion.

Round 1: Report Out

Round 1: Share Out

Geoff will ask a few pairs to share what they discussed. Listen for ideas that challenge or extend your own thinking.

Round 2: The Case for Understanding

Partner Activity

This activity involves working with a partner.

Somers' key move: compression requires understanding. Eric Baum argues that to compress the world's information well, you must discover its deep structure. Doris Tsao found that the brain encodes faces as points in a roughly 50-dimensional space, itself a lossy compression scheme. Pentti Kanerva's Sparse Distributed Memory, a 1980s model of human memory, turns out to be "eerily similar" to the attention mechanism in Transformers. And Hofstadter, who spent decades arguing that AI couldn't think, changed his mind after GPT-4: cognition, he now says, just IS "seeing as," and LLMs do it. Yet he is "terrified" by what that implies.

Discuss with your partner: If the brain compresses information using mechanisms similar to what LLMs use, does that change your view of Chiang's argument? What does it mean when Hofstadter — someone who dedicated his career to arguing against AI understanding — changes his mind? Is his emotional reaction (terror, grief) a reasonable response?

You can reference both articles during the discussion.

Round 2: Report Out

Round 2: Share Out

Geoff will ask a few pairs to share what they discussed. Listen for ideas that challenge or extend your own thinking.

Round 3: The Writing Test

Partner Activity

This activity involves working with a partner.

Chiang's strongest argument may be about writing. He says a first draft is "original ideas, poorly expressed" — you start with something to say and struggle to say it well. LLMs do the reverse: they produce polished prose without having had an original thought. Somers would counter that the sprinkler example shows LLMs can reason — GPT-4 correctly identifies which way water flows based on what appears to be genuine causal understanding.

Discuss with your partner: Think about your own writing process vs. how AI generates text. Is Chiang right that there's something fundamentally different happening? Or could the "original thought" be emerging inside the model in ways we can't see? What about AI-assisted writing — when you use AI to help you write, where does the "understanding" live? What would it mean for you personally if AI really does "understand" in some meaningful sense?

You can reference both articles during the discussion.

Round 3: Report Out

Round 3: Share Out

Geoff will ask a few pairs to share what they discussed. Listen for ideas that challenge or extend your own thinking.

Wrap-Up

Closing Reflection

Three questions, three partners, many perspectives. We started with Chiang's analogy — AI as a blurry JPEG — and asked whether that's fair. We looked at Somers' response — the scientific evidence that compression and understanding might be the same thing. And we got personal — what does this debate mean for writing, for creativity, for how we think about our own minds?

These two articles won't be the last word. The question of whether AI "understands" will keep evolving as the technology does — and as our understanding of our own cognition deepens. Pay attention to when this debate shows up in your own interactions with AI.

Feedback
