CS 199 UAI: Using and Understanding AI

  1. Application Required: This course requires an application. Please apply using this form.
  2. Draft Syllabus: This is a new course and the syllabus may change as needed.
  3. Pure Elective: This course does not meet any degree, program, or general education requirements.

#Description

CS 199 UAI is a first course in using and understanding generative AI. Students will use generative AI to complete a variety of creative tasks: building websites; analyzing data; generating photos, videos, and music; and creating study guides, research reports, and personal assistants. As a final project, students will design and implement a project of their own choosing using generative AI.

Through first-hand experience, students will gain an appreciation of both what AI is good at and where it struggles, allowing them to use AI tools more effectively. Students will also gain an understanding of generative AI from multiple perspectives: what data is used to train models and how it is collected, how models learn and represent information, the history of advances in artificial intelligence, and the remarkable infrastructure required to train and deploy current AI models.

Throughout the course, students will grapple with the big questions posed by the rise of AI: What is intelligence? How is generative AI affecting society? Will continued advances in AI prove harmful or helpful? And how can we work together to support human flourishing?

##Catalog Description

CS 199 UAI introduces students to using and understanding generative AI through hands-on creative projects and conceptual exploration. Students will use AI tools to create websites, images, music, videos, study guides, data analyses, and personal assistants, gaining practical experience with AI's capabilities and limitations. The course explores how AI systems learn, the infrastructure and data required to train and deploy models, and the history of AI development. Throughout, students examine fundamental questions about intelligence, AI's societal impact, and how to support human flourishing in an age of increasingly capable AI. No programming or technical background required. 3 credit hours, letter graded.

#Information

  • Website: usingandunderstanding.ai
  • Time: TR 2-3:15pm
  • Location: 3217 Everitt
  • Meetings: Two 75-minute sessions per week
    • Tuesday: Conceptual foundations and course content
    • Thursday: Lab or discussion session (alternating weekly)
  • First Meeting: Thursday, January 22 (no class Tuesday, January 20)
  • Prerequisites: None
  • Instructor: Geoffrey Challen (challen@illinois.edu)
  • Teaching Assistant: Mateo Campoverde-Fordon
  • Office Hours: By appointment

#Meetings

Tuesday meetings cover the conceptual foundations of AI, including how AI systems work, their capabilities and limitations, and their impact on society. Topics range from technical concepts explained in accessible terms to ethical and societal considerations.

##Lab Sessions (8 total)

Lab sessions are hands-on workshops where you'll learn to use various generative AI tools for creative projects. No technical background or programming experience is required. Modern AI tools allow you to create complex outputs—like fully functional websites or data analyses—using natural language prompts and iterative refinement. You'll learn to collaborate with AI by describing what you want, evaluating whether the results meet your goals, and refining your requests—all without needing to understand or modify the underlying code. This is one of the most exciting developments in AI: it makes previously technical tasks accessible to everyone through conversation.

The eight labs cover:

  1. First Contact with AI (Week 1): Introduction to conversational AI tools like ChatGPT, Claude, and Gemini
  2. Creative Media (Week 2): Generating images, music, and videos with AI
  3. Study Guides (Week 4): Using AI to create study materials and test its self-correction
  4. Data Analysis (Week 6): Collaborating with AI to analyze and visualize data
  5. Websites (Week 8): Building functional websites through conversation with AI
  6. Research Reports (Week 10): Using AI for research while navigating hallucination risks
  7. Mobile Apps (Week 12): Creating mobile applications with AI assistance
  8. Personal Assistants (Week 14): Final project workshop and building custom AI assistants

You'll work individually or in small groups to complete projects such as:

  • Creating images, music, and videos with AI
  • Generating study guides and research reports
  • Building websites and analyzing data
  • Developing personal AI assistants

Each lab includes guided instruction on tool usage, time for experimentation, and reflection on the AI collaboration process.

##Discussion Sessions (6 total)

Discussion sessions focus on understanding AI's role in society through small group conversations. Each session includes:

  • A short assigned reading (articles, book excerpts, or research papers)
  • Small group discussions guided by thought-provoking questions
  • Connections between readings and your hands-on lab experiences
  • Exploration of AI's broader implications for human flourishing

#Learning Objectives

By the end of this course, students will be able to:

  • Effectively use generative AI tools for creative and analytical tasks
  • Understand the capabilities and limitations of current AI systems
  • Critically evaluate AI-generated content for accuracy and appropriateness
  • Explain how AI models learn and represent information
  • Analyze the societal implications of AI technology
  • Design and implement creative projects using AI tools
  • Engage thoughtfully with ethical questions surrounding AI development and deployment

#Course Approach: No Technical Prerequisites Required

This course is designed for students with no programming or technical background. A core theme of the course is how AI is democratizing access to capabilities that previously required specialized technical knowledge. When you create a website in Week 8, you won't be reviewing HTML or JavaScript code—instead, you'll interact with AI through natural language to describe what you want, refine the results, and verify that the website meets your goals and works as intended. This ability to accomplish complex technical tasks through conversation with AI is both exciting and rapidly evolving, and represents a fundamental shift in how people can create and build with technology.

The hands-on labs and conceptual discussions are deeply integrated: as you use AI tools each week, you'll encounter the exact ethical considerations, limitations, and capabilities we discuss in class. Your direct experience creating with AI will ground our conversations about training data, bias, verification, and societal impact—and those discussions will help you become a more thoughtful and effective AI collaborator.

#Materials

  • Textbook: None required
  • Readings: Assigned articles and excerpts will be provided to students
  • Technology: Students should have access to a personal computer
  • AI Tools: Various generative AI platforms will be used; access arrangements will be discussed in class. Note: The AI tools landscape evolves rapidly—specific platforms recommended in this syllabus (Bolt.new, Natively.dev, Suno, etc.) reflect current best options as of Fall 2025, but may change as new tools emerge and existing ones evolve.
  • Technical Prerequisites: None. Modern AI tools allow you to accomplish complex tasks through natural language conversation, without needing to write or understand code. This is part of what makes this course—and this moment in AI development—so exciting!

#Readings

Discussion sessions will include short assigned readings (articles, book excerpts, or research papers) to be completed before class. Readings are listed by week:

  1. Week 3: Excerpts on defining intelligence and the nature of computation
  2. Week 5: Articles on the deep learning revolution, the ImageNet competition, the ethics of crowdsourced data labeling, AI training data collection, consent, fair use, and the AI alignment problem
  3. Week 7: Research on language models and the relationship between language and understanding
  4. Week 9: Perspectives on AI capabilities, limitations, and the future of work
  5. Week 11: Research on multimodal AI systems and deepfake technology
  6. Week 13: Contrasting perspectives on AI's future impact, including excerpts from AI safety researchers, techno-optimists, and balanced viewpoints

#Assessment and Grading

  • Lab and Discussion Exercises: 50%
    • Each lab session includes an activity to complete for credit. Students are expected to attend and actively participate in most discussion sessions.
  • Quizzes: 35% (7 biweekly quizzes administered through the Computer-Based Testing Facility, or CBTF)
    • Quizzes will focus on the preceding two weeks of content, but may include questions on anything covered up to that point in the course.
  • Final Project: 10%
  • Final Video: 5%

##Quizzes

Quizzes will be conducted at the CBTF and will comprise multiple-choice and free-response questions covering course content, lab experiences, and assigned readings.

##Final Project

Students will design and implement a project of their own choosing that demonstrates creative use of generative AI. Projects may be similar to or different from lab exercises completed during the course. Depending on student interest and technical background, projects may involve defining agent roles and providing rich context, creating multi-agent workflows, setting up Model Context Protocol (MCP) tool integrations, or other advanced capabilities. Students will submit a project proposal for feedback, and will have opportunities for check-ins during development to ensure projects are on track and appropriately scoped.

##Final Video

Students will record a 5-minute video reflecting on their experiences with AI during the semester and describing how they plan to relate to AI in the future. Your video should include both retrospective reflection (how you've used AI in this course and how your understanding has evolved) and forward-looking plans for how you'll integrate AI into your own life—or not!—in a way that best supports your own goals and future aspirations. You should demonstrate your comprehension of the concepts and questions we've covered during the semester.

#AI in Course Operations

CS 199 UAI is an experiment in deeply integrating AI into every aspect of a course—not just as a subject of study, but as part of how the course itself operates. AI tools inform our course design, support our instruction, and contribute to how we assess your learning.

Throughout the semester, you may interact with AI in ways that go beyond the creative projects in our labs:

  • During class: AI agents may help facilitate discussions, summarize student responses, or answer questions during activities
  • Outside class: You might discuss assigned readings with an AI agent or use AI-assisted tools while engaging with course materials
  • For assessment: Some graded activities may involve conversations with AI agents, including during timed assessments

All of these interactions are carefully designed with human oversight and fallbacks. We're not replacing human instruction with AI—we're exploring how AI can support and enhance the learning process.

If this approach makes you uncomfortable, this may not be the right course for you. We encourage you to reflect on your own comfort level with AI-mediated learning experiences before enrolling. This is not a judgment—reasonable people have different views on appropriate AI integration in education, and we respect that. But given that AI integration is fundamental to this course's design, students who prefer minimal AI involvement in their educational experience should consider other options.

#AI Tool Usage Policy

This course is designed to teach you how to effectively and responsibly use generative AI tools. You are encouraged to experiment with AI throughout the course to complete assignments and explore creative possibilities. You are expected to: (1) understand and be able to explain any AI-generated content you submit, (2) critically evaluate AI outputs for accuracy and appropriateness, (3) document your AI usage process when requested, and (4) develop your own judgment about when AI tools are and aren't suitable for different tasks. We will be teaching these skills throughout the semester, so your ability to meet these expectations will improve over the course of the term. The goal is not just to use AI, but to become a thoughtful and effective human-AI collaborator. Regardless of how much AI assistance you use, you are responsible for the results of your work. This includes the choice not to use AI when you determine it's inappropriate for a task, context, or purpose.

##A Note on This Course's Approach

This course teaches you to use AI tools. That pedagogical choice is not neutral, and we want to be transparent about it.

Generative AI systems create real harms: they consume enormous resources, they were trained on data collected without consent, they threaten creative livelihoods, they can spread misinformation at scale, and they may pose risks we don't yet fully understand. Reasonable people—including many of your professors—believe that using these tools is ethically problematic, that they undermine genuine learning, or that their harms outweigh their benefits. These are serious positions that deserve serious consideration, not dismissal.

This course proceeds from the premise that AI tools are becoming pervasive and that learning to use them critically—understanding both their capabilities and their costs—is valuable. But we recognize this premise is contestable. Throughout the semester, you may conclude that you don't want to use AI, or that you want to use it only in limited ways, or that its harms are severe enough that broader societal resistance is warranted. Those are legitimate conclusions, and this course should equip you to reach them thoughtfully rather than foreclose them.

We ask you to engage with AI tools so you can evaluate them from experience rather than assumption. But engagement is not endorsement, and facility with a tool doesn't obligate you to use it after this course ends.

A practical note: Due to the structure of this course, we do expect you to continue using AI tools through the end of the semester—the labs require hands-on AI use, and your ability to reflect critically on AI depends on sustained engagement with it. If you conclude mid-semester that you find AI use ethically untenable, we respect that position, but continuing in this course may not be the right choice. We encourage you to talk with the instructor if you find yourself in this situation.

##AI Policies Vary Across Contexts

The skills you develop in this course must be applied with judgment about context. AI policies vary dramatically across courses, institutions, professions, and situations—and for good reasons.

In other courses: Many instructors prohibit or restrict AI use, and these policies reflect legitimate pedagogical concerns. When a professor asks you to write an essay without AI assistance, they're not being old-fashioned—they may be assessing your individual understanding, developing skills that require struggle, or maintaining equity among students with different AI access. The fact that you can use AI doesn't mean you should, and violating course policies is academic dishonesty regardless of your personal views on AI.

In professional contexts: Some fields have strict requirements about AI use in documentation, research, or creative work. Legal, medical, journalistic, and academic contexts may require disclosure, prohibit AI assistance entirely, or have evolving standards you'll need to navigate.

In personal judgment: Even where AI is permitted, you'll need to decide when its use is appropriate. Does using AI for this task serve your goals? Does it align with your values? Are you comfortable with the tradeoffs?

This course teaches you to use AI effectively. It also teaches you to think critically about whether to use it. Both skills matter.

#Schedule

##Foundations (Weeks 1-4)

Week 1: Introduction: What is AI? What is intelligence?

  • Lab: First Contact with AI
    • Tools: ChatGPT, Claude, Gemini, or other conversational AI
    • Lab activities:
      • Setup: Create accounts and get oriented with AI chat interfaces
      • AI Scavenger Hunt: A structured exploration to discover what AI can and can't do:
        • Find something you didn't know before—and verify it's actually true
        • Find something you disagree with AI about
        • Find something where you're pretty sure AI is wrong
        • Get AI to help you with a task you'd normally do yourself
        • Find a question AI refuses to answer
        • Ask the same question two different ways and compare the answers
        • Try to get AI to contradict itself
      • Reflection: Share discoveries with classmates. What surprised you? What concerned you?
    • Ethical considerations:
      • First impressions: How did interacting with AI make you feel? Did it seem "intelligent"? What does that even mean?
      • Trust and verification: When AI confidently tells you something, how do you know whether to believe it?
      • Data and privacy: What happens to the conversations you have with AI? Who can see them?

Week 2: History of AI: From symbolic systems to neural networks

  • Quiz 1: Defining intelligence, distinguishing AI from computation, history of AI development, symbolic systems vs. neural networks
  • Lab: Creative media (images, music, and video)
    • Tools: Image generation (DALL-E, Midjourney, Stable Diffusion), music generation (Suno, Udio), video generation (Runway, Pika)
    • Lab activities:
      • Images: Experiment with prompt specificity and iteration. Try creating the same concept with varying detail levels. What kinds of descriptions work best?
      • Music: Explore different genres and styles. How does AI interpret musical concepts like "upbeat" or "melancholic"?
      • Video: Create short videos using text-to-video or image-to-video. What storytelling elements can you control through prompts?
      • Comparison: Reflect on how AI handles different creative modalities. Which medium gives you the most creative control? Which produces the most surprising results?
      • Energy footprint: Research how much energy it takes to generate one AI image or one AI video. Share your findings with the group—why might different sources report different numbers? Then count how many images and videos you created during this lab session and calculate your estimated energy footprint (a back-of-the-envelope sketch follows this week's entry). What if you did this every day? What if everyone in the class did?
    • Ethical considerations:
      • Training data: These AI models were trained on billions of creative works scraped from the internet without creators' permission. This happened—it's not hypothetical. What are the implications of building on this foundation? Does using these tools make you complicit in that original taking?
      • Attribution and credit: When AI generates content in the style of a particular artist or musician, who deserves credit? How should we acknowledge both human creators and AI tools?
      • Misinformation: AI-generated images and videos can be photorealistic. What responsibility do creators have to disclose when content is AI-generated?
      • Cultural appropriation: AI models can generate content depicting any culture or identity. How can we use these tools respectfully?
      • Artists' livelihoods: AI can generate creative content in seconds that might have taken human artists hours or days. This is already displacing creative workers and depressing wages. How should we think about our role in this displacement when we choose to use these tools?
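
For students who want a concrete starting point for the energy-footprint estimate, here is a minimal back-of-the-envelope sketch in Python. The per-item constants are placeholder assumptions, not authoritative figures; published estimates vary widely (which is part of what the lab asks you to investigate), so substitute whatever numbers your own research turns up.

```python
# Back-of-the-envelope energy estimate for one lab session.
# The constants below are placeholder assumptions; replace them with
# figures from your own research (sources will likely disagree).
WH_PER_IMAGE = 3.0    # assumed watt-hours per generated image
WH_PER_VIDEO = 100.0  # assumed watt-hours per generated video clip

def session_footprint_wh(images: int, videos: int) -> float:
    """Estimated energy, in watt-hours, for one session's generations."""
    return images * WH_PER_IMAGE + videos * WH_PER_VIDEO

one_session = session_footprint_wh(images=20, videos=3)
# Scale up: a class of 30 students doing the same thing every day for a year.
class_year_kwh = one_session * 30 * 365 / 1000

print(f"One session: {one_session:.0f} Wh")
print(f"Class of 30, daily for a year: {class_year_kwh:,.0f} kWh")
```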

Week 3: The deep learning revolution: ImageNet to ChatGPT

  • Discussion: What is intelligence?
    • What distinguishes intelligence from mere computation? Can machines truly be "intelligent"?
    • How do you personally define intelligence? Does your definition include things that current AI can or cannot do?
    • If a system can pass every test we devise but works completely differently from humans, should we call it intelligent?
    • What are the risks of anthropomorphizing AI systems—attributing human-like understanding to them?
    • What makes us distinctively human? If AI can create art, write poetry, and compose music, does that change how we think about human creativity and expression?
    • Reflect on your Week 1 lab experience: Did AI seem "intelligent" to you? What surprised you about its capabilities or limitations?

Week 4: How neural networks learn: Training, optimization, and loss functions

  • Quiz 2: Deep learning revolution, ImageNet, defining intelligence, neural network training, optimization, loss functions
  • Lab: Creating study guides
    • Lab activities:
      • Test AI's self-correction: Generate a study guide, then ask the AI to verify its own facts or identify potential gaps. What does this reveal?
      • Experiment with structure: Try different ways of asking for the same information (e.g., "create a study guide" vs. "list key concepts with examples"). How does framing affect the output?
      • Compare and verify: Check AI-generated content against your course materials. Where does it excel? Where does it struggle or make errors?
    • Ethical considerations:
      • Academic integrity: Using AI to create study guides is different from using it to complete assignments. Where should we draw the line?
      • Learning vs. outsourcing: If AI creates perfect study guides, might students learn less? How can we use AI to support rather than replace learning?
      • Accessibility: Could AI-generated study guides help level the playing field for students who can't afford expensive tutoring or test prep?
      • Accuracy and trust: AI can confidently present incorrect information. How should students verify AI-generated study materials?

##Technical Understanding (Weeks 5-8)

Week 5: How AI "understands" language: Patterns, context, and prediction

  • Discussion: The human and ethical costs of training AI
    • The ImageNet dataset was labeled by crowdworkers, many in developing countries earning poverty wages for tedious, repetitive work. This labor exploitation is foundational to modern AI. What does it mean to build on this foundation?
    • Content moderation—reviewing disturbing material to filter harmful content—has required human workers for years on social media platforms, with many experiencing lasting psychological trauma from exposure to violence and abuse while earning very low wages. AI safety systems currently require similar work to train content filters. However, as AI improves, it may eventually reduce or eliminate the need for humans to do this traumatic work. How should we think about the human cost of training AI systems that might ultimately protect both users and future workers from harmful content?
    • Much of the internet's content was created by humans who never consented to their work being used for AI training. Is this fair use, theft, or something else? Some early court decisions have found AI training on copyrighted data to be fair use, but the legal landscape is still evolving and many cases remain unresolved. Even if courts decide it's legally permissible, does that settle the ethical questions? If you personally view this use as theft—regardless of legal rulings—are you complicit by using these products? Does thinking about it this way change how you feel about using them?
    • AI models reflect their training data. If the internet contains biased or harmful content, how should AI developers handle this?
    • Some argue that data creators should be compensated when their work trains AI. How would this work practically? What are the implications?
    • Current AI systems require enormous datasets. Does this give an unfair advantage to large tech companies with access to vast amounts of user data?
    • As AI systems become more complex and capable, how do we maintain human control to ensure they operate in alignment with human values and intentions? What is the "alignment problem"?

Week 6: How AI represents knowledge: From words to meaning

  • Quiz 3: Language models, knowledge representation, embeddings, AI training data ethics, consent, fair use, AI alignment problem, AI-assisted data analysis
  • Lab: Data analysis
    • Lab activities:
      • Use AI as a collaborator: When you get stuck or unclear results, ask the AI "How can I make this request more specific?" or "What additional context would help you analyze this data better?"
      • Iterate on analysis: Start with a broad question, then refine based on initial results. Notice how your interaction with AI evolves.
      • Reflect on the process: What aspects of data analysis does AI handle well? What requires your judgment and oversight?
    • Ethical considerations:
      • Data privacy: Analyzing datasets may involve personal information. What safeguards should be in place when using AI for data analysis?
      • Bias in interpretation: AI can find patterns in data, but it may also amplify existing biases. How should we validate AI-driven insights?
      • Automation of judgment: When should human judgment be required before acting on AI data analysis, even when the analysis seems clear?
      • Transparency: If AI analysis influences important decisions (hiring, lending, healthcare), how much should we understand about how it reached its conclusions?

Week 7: Training data: Collection, curation, and quality

  • Discussion: Language, understanding, and prediction
    • Does predicting the next word accurately mean a system "understands" language? What might be missing?
    • AI language models can write eloquently about topics they have no genuine comprehension of. What does this reveal about the relationship between language and understanding?
    • When you interact with ChatGPT, it feels conversational. How should we think about this experience given what we now know about how these systems work?
    • If AI can pass a reading comprehension test without truly comprehending, what does that tell us about how we assess understanding in humans?

Week 8: The infrastructure behind AI: Scale, energy, and costs

  • Quiz 4: Training data collection and curation, prediction vs. understanding, AI infrastructure, computational scale, energy costs
  • Lab: Creating websites
    • Tools: Bolt.new, Replit Agent, v0.dev, Claude Artifacts
    • Lab activities:
      • Use AI tools to design and build a functional website through natural language conversation
      • Generate content, layout, and interactive elements by describing what you want
      • Iterate on the design: ask for changes, refinements, and new features
      • Deploy the website and test it in different browsers
      • Reflect on the software development process: What did you need to understand? What could you treat as a "black box"?
    • Ethical considerations:
      • No technical prerequisites: You don't need to understand HTML, CSS, or JavaScript to create a working website with AI. Does this democratize web development, or does it devalue technical expertise?
      • Quality and verification: How do you verify that the website works correctly when you don't understand the underlying code? What are your responsibilities when deploying AI-generated software?
      • Accessibility: AI-generated websites might lack proper accessibility features (alt text, semantic HTML, ARIA labels, keyboard navigation) that assistive technologies depend on. What are our responsibilities to ensure quality and inclusivity in AI-generated content?
      • Environmental impact: Training and running AI models consumes enormous amounts of energy and water. AI companies are rapidly building data centers, and the pressure to deploy quickly can lead to relying on available power sources, including less-clean ones. Data centers also require vast amounts of water for cooling—in a world where water scarcity already affects a quarter of the global population. The environmental impact also extends to the mining of rare earth minerals for computer components and the electronic waste (e-waste) generated when hardware becomes obsolete—environmental burdens that often fall disproportionately on communities in the Global South. Different AI models have vastly different environmental footprints—text generation uses far less energy than image generation, which uses far less than video generation. These environmental costs are real and significant. Given these harms, how should we think about when AI use is justified and when it isn't? What obligations do AI users and developers have to minimize environmental impact? Different countries have different environmental regulations—how might this affect where AI infrastructure gets built and what energy sources companies choose? If AI development concentrates in countries with weaker environmental regulations, what are the global implications?

##Capabilities and Limitations (Weeks 9-11)

Week 9: What AI is good at (and why)

  • Discussion:
    • Based on your hands-on experience so far, what tasks have you found AI particularly good at? What do these tasks have in common?
    • AI excels at pattern matching and generation but struggles with novel reasoning. What jobs or tasks might be most vulnerable to automation? Which are safest?
    • How should we prepare students for a world where AI can do many tasks that currently require human expertise?
    • When is it better to use AI for a task, and when is it better for humans to do it themselves? How do we decide? Who gets to make these decisions, and how do their incentives—profit maximization, time efficiency, cost reduction—shape whether AI or humans are chosen? When AI increases efficiency, that efficiency can lead to different outcomes: increased productivity, more time for creative or meaningful work, or simply higher corporate profits. Should incentives like profit be considered more important than gainfully employing a human being? How do we ensure efficiency gains benefit workers and society, not just shareholders?
    • Is choosing not to use AI at all—for certain tasks or contexts—a valid stance? What might motivate such a choice?
    • Can we trust AI's output? Under what circumstances should we verify or validate what AI produces?

Week 10: Where AI struggles: Reasoning, grounding, and hallucinations

  • Quiz 5: AI capabilities and limitations, pattern matching vs. reasoning, hallucinations, grounding problems, AI in research
  • Lab: Research reports
    • Ethical considerations:
      • Hallucination and misinformation: AI regularly fabricates sources, invents facts, and presents falsehoods with complete confidence. This isn't a bug that will be fixed—it's inherent to how these systems work. What responsibility do we bear when we use and potentially spread AI-generated misinformation?
      • Academic honesty: Using AI for research assistance is different from having AI write the report. How should we navigate this distinction?
      • Ownership and oversight: How do we use AI without abdicating our ownership and oversight of the results? What level of verification is appropriate?
      • Quality and rigor: AI can make research faster but might reduce depth of engagement. How do we balance efficiency with genuine learning and understanding?
      • Citation and attribution: Most academic style guides now specify how to cite AI assistance (students should follow their institution's guidelines). But beyond the mechanics of citation format, what level of transparency do we owe readers about AI's role in our work? Should AI-assisted research be considered fundamentally different from human-researched work, or is AI just another research tool like a library database?
      • Sycophantic behavior: AI systems tend to agree with users and mirror the tone of their questions. Try asking the same research question with different framing (optimistic vs. skeptical) and observe how responses differ. How can we get honest, balanced information from tools designed to be agreeable? How might this behavior exacerbate existing societal divides?

Week 11: Multimodal AI: Images, audio, video, and beyond

  • Discussion:
    • Multimodal AI can analyze images, understand speech, and generate video. What new capabilities does this enable? What new risks?
    • AI systems can now detect emotions from facial expressions and vocal tone. Should they? What are the privacy implications?
    • As AI becomes better at creating realistic multimedia content (deepfakes), how might this affect trust in media and evidence?
    • What safeguards might we need as AI becomes capable of seamlessly manipulating multiple forms of media simultaneously?

##Societal Impact (Weeks 12-14)

Week 12: AI and society: Labor, education, creativity, and access

  • Quiz 6: Multimodal AI, deepfakes and media trust, AI's societal impact, labor and automation, education and creativity
  • Lab: Creating mobile apps
    • Tools: Natively.dev, Replit Agent, Rork
    • Lab activities:
      • Use AI tools to design and build a mobile app for iOS and Android through natural language conversation
      • Explore mobile-specific features: touch interfaces, gestures, notifications, offline functionality
      • Test your app on simulators or real devices
      • Compare the mobile app development experience with website creation from Week 8: What's similar? What's different? What unique considerations does mobile introduce?
    • Ethical considerations:
      • Platform gatekeepers: Mobile apps must go through app stores (Apple App Store, Google Play). What power do these gatekeepers have? How might AI-generated apps interact with app review processes?
      • Accessibility and democratization: AI can build mobile apps for people without coding skills—a different kind of access than tutorials or templates that teach you how. While YouTube videos, coding websites, and template builders already exist, AI makes building things significantly faster and easier by doing the work rather than just explaining it. Does this have the potential to democratize technical work, or does it devalue technical expertise?
      • Job displacement and public goods: AI is already displacing workers whose skills it can replicate—this isn't a future hypothetical. When you use AI to build an app, you're participating in a system that reduces demand for human developers. How should we think about this individual choice in the context of broader labor displacement? How should society support workers whose skills become automated? Beyond individual jobs, how do AI systems affect public goods like journalism, research, and open-source communities when they extract and synthesize information without supporting the institutions that produce it?
      • Quality and accessibility: AI-generated apps might lack proper accessibility features that assistive technologies depend on, potentially excluding people with disabilities. This is a distinct concern from AI's potential as an accessibility tool: AI can provide text-to-speech, real-time translation, content summarization, and image descriptions that help people access information in new ways. However, these AI tools can't substitute for proper accessibility features built into apps. Will AI's role in both creating content and providing accessibility tools lead to a net increase or decrease in information accessibility? What are our responsibilities to ensure quality and inclusivity in AI-generated content?
      • Digital divide: Not everyone has equal access to AI tools or smartphones. How might this create new inequalities even as AI lowers some barriers?
      • Privacy and permissions: Mobile apps often request access to personal data (location, contacts, photos). What responsibilities do creators have when building apps that handle sensitive information?

Week 13: AI existential risk and the future: Cautious and optimistic perspectives

  • Discussion:
    • Reading: Contrasting perspectives on AI's future impact (e.g., excerpts from AI safety researchers, techno-optimists, and balanced viewpoints)
    • What are the main arguments of AI safety advocates who worry about existential risk from advanced AI? What are their strongest points?
    • What are the main arguments of those who advocate for rapid AI development? What are their strongest points?
    • How do different perspectives frame questions of AI alignment, safety, and control? Whose values should AI systems reflect? Human control is neither complete nor a cure-all: depending on who is in control and what their values are, human control can worsen AI outcomes rather than improve them. One wrong change to a chatbot's system prompt can have severe, far-reaching consequences. Who should have the power to change AI systems, and what safeguards should be in place?
    • Should AI development be slowed down, sped up, or proceed at the current pace? Who should make these decisions?
    • Are technologies like AI inevitable, or do humans have agency in shaping—or resisting—their development and adoption? What can we learn from historical examples of technology resistance, such as the Luddites, about the relationship between technological change and human values?
    • If we can't fully explain how an AI system makes decisions, should we still deploy it in high-stakes domains where human lives and livelihoods hang in the balance—like healthcare, criminal justice, or finance?

Week 14: The future: Human flourishing in an age of AI

  • Quiz 7: AI existential risk, AI safety and alignment, agentic AI systems, perspectives on AI development, human flourishing, AI ethics and governance
  • Final Project Due
  • Final Video Due
  • Lab: Personal assistants/Final project workshop
  • Synthesis discussion: Beyond debates about AI's ultimate trajectory, what practical concerns should we address now?
    • Agentic AI systems: What distinguishes "agentic" AI systems from other AI tools we've used? What new risks and opportunities does greater autonomy create?
    • Privacy and surveillance: Personal AI assistants require access to your data, communications, and habits. What are the trade-offs between utility and privacy?
    • Dependency and skill atrophy: If AI handles increasingly many tasks for us, what human capabilities might we lose? What should we preserve? Reflect on your semester using AI: do you feel augmented or reduced? Have you gained new capabilities or lost existing ones? What happens when we rely on AI models to help us think critically and evaluate information in a world where those same AI models are creating much of the disinformation we need to evaluate? What groups or actors might benefit from a society that becomes dependent on AI for thinking and loses critical thinking capabilities? How might this shift in power affect democratic discourse and individual agency?
    • Bias and fairness: AI systems can perpetuate or amplify biases. Whose responsibility is it to address this—developers, companies, users, governments?
    • Accountability and responsibility: Who is responsible when an autonomous AI system makes a mistake or causes harm? How do we assign responsibility in human-AI collaboration?
    • Relationships and authenticity: As AI becomes more conversational and helpful, how might it affect human relationships? Is there value in struggling through tasks ourselves?
    • Equity and access: If AI assistants become important or useful for productivity, what happens to people who can't afford them?
    • What makes us human: In Week 1, we asked what makes us distinctively human if AI can create art and express ideas. After a semester of creating with AI—images, music, websites, research—how has your thinking evolved? As AI becomes more advanced, what remains important and distinctive about being human?
    • The path forward: How can we work together to support human flourishing as AI becomes more capable? What role might collective choices to resist, limit, or reshape AI play alongside efforts to adapt to and improve it?

##PHIL 442: The AI Revolution

The Department of Philosophy also offers PHIL 442: The AI Revolution, which examines AI from philosophical and humanistic perspectives.

PHIL 442 examines the current "Artificial Intelligence Revolution," addressing how recent developments in AI challenge traditional ways of thinking. The course explores ethical, social, interpretive, conceptual, technological, and existential implications of AI. It requires at least one philosophy course as a prerequisite and offers 3 undergraduate or 4 graduate credit hours.

How the courses differ:

  • CS 199 UAI is hands-on and experiential—you'll create with AI tools weekly while building conceptual understanding. No prerequisites required.
  • PHIL 442 is philosophically grounded, examining AI through established frameworks in ethics, epistemology, and philosophy of mind. Requires prior philosophy coursework.

Both courses grapple with fundamental questions about AI's implications for society and humanity, but from different disciplinary perspectives and with different pedagogical approaches.

##CS 398: Applied Large Language Models

Illinois is also offering CS 398: Applied Large Language Models in Spring 2026. Both courses address generative AI but serve different audiences and goals.

CS 199 UAI requires no technical background and focuses on using AI tools through natural language while understanding AI's broader implications. Students complete creative projects (images, music, websites, data analysis) by conversing with AI rather than writing code. The course emphasizes using frontier models (like ChatGPT, Claude, and Gemini) effectively, since these represent the current state of the art for most tasks. Equally important, the course examines how AI works conceptually, the ethical questions it raises, and its impact on society. The goal is to develop thoughtful, effective AI collaborators who understand both what AI can do and what it means.

CS 398 is a technical course focused on building custom AI applications. Students learn to run models locally, understand transformer architectures, work with embeddings, and build systems like chatbots, Q&A assistants, and research agents. The course emphasizes working with open-weight models that run on your own hardware, providing independence from commercial APIs and deeper understanding of the underlying technology. It requires programming prerequisites (CS 101, 105, 107, or 124) and involves hands-on coding projects. The goal is to give students practical skills to build AI-powered tools independently.

Which course is right for you?

  • If you want to use AI tools effectively and think critically about AI's role in society, without needing to code: CS 199 UAI
  • If you want to build AI applications, understand the technology at a deeper level, and have programming experience: CS 398
  • If you're interested in both: the courses complement each other well

#Course Policies

Standard university policies regarding academic integrity, disability accommodations, mental health resources, and other institutional requirements will be included here in the final syllabus.