Which topics to teach in what context?

  • Participatory Event 2

  • Day 1: June 10th from 14:00 to 15:00 UTC-04:00 (ET)

  • How to connect: Zoom via Underline

Introduction

When we design NLP courses, many of us struggle with which topics, models, and tasks to include or exclude in a given context. For example,

  • Should we teach hidden Markov models or automatic speech recognition in an introductory NLP course targeted towards linguistics majors?

  • Is it OK to omit semantic parsing from an introductory NLP course targeted towards computer science majors?

  • Should we skip details of LSTMs and focus more on transformer-based architectures in introductory NLP courses?

We also struggle with how to balance theory and practice. How much detail is appropriate in a given context? For example,

  • Should we go into the details of Gibbs sampling when teaching Latent Dirichlet Allocation for topic modeling to non-computer science majors? Or is it more useful to spend that time on practical aspects such as the model's hyperparameters and the evaluation and interpretation of the topics it produces (see the first sketch after this list)?

  • How much time should we spend on teaching LSTMs versus showing how to implement them for different tasks using tools such as PyTorch or TensorFlow (see the second sketch after this list)?
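
As a concrete illustration of the "practical aspects" side of the first question above, here is a minimal sketch of fitting a topic model and inspecting its output. It assumes the gensim library and a toy tokenized corpus; neither is prescribed by this event, and the hyperparameter values are illustrative only. It is simply the kind of hands-on material a course could emphasize instead of the sampling derivations.

```python
# Minimal sketch (assumes gensim; corpus and hyperparameter values are illustrative only).
from gensim import corpora
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

# Toy tokenized corpus standing in for real course data.
texts = [
    ["election", "vote", "party", "policy"],
    ["vote", "campaign", "policy", "debate"],
    ["speech", "audio", "signal", "recognition"],
    ["speech", "recognition", "model", "audio"],
]

dictionary = corpora.Dictionary(texts)                # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in texts]   # bag-of-words vectors

# Hyperparameters students could experiment with: num_topics, alpha, eta, passes.
lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=2, alpha="auto", eta="auto",
               passes=20, random_state=0)

# Interpretation: top words per topic.
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)

# Evaluation: one common topic-coherence measure.
coherence = CoherenceModel(model=lda, texts=texts,
                           dictionary=dictionary, coherence="c_v").get_coherence()
print("c_v coherence:", coherence)
```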

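For the second question, the implementation side might look like the following sketch of a small LSTM text classifier in PyTorch. The architecture, dimensions, and dummy batch are assumptions made for illustration, not material from the event.

```python
# Minimal sketch of an LSTM text classifier in PyTorch (all sizes are illustrative).
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)      # hidden: (1, batch, hidden_dim)
        return self.fc(hidden.squeeze(0))         # (batch, num_classes)

model = LSTMClassifier(vocab_size=10_000, embed_dim=100, hidden_dim=128, num_classes=2)
dummy_batch = torch.randint(0, 10_000, (4, 20))   # 4 sequences of 20 token ids
logits = model(dummy_batch)
print(logits.shape)                               # torch.Size([4, 2])
```
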
In this participatory activity, you will create course schedules for NLP courses targeted at different NLP learner personas. You will be randomly assigned to a group of 3 to 5 people and will work in breakout rooms.

Consider the following three NLP learners with different expectations.

Jim

  • Jim is a 3rd year undergrad in computer science.
  • Courses taken before: introductory machine learning, introductory statistics, calculus, and linear algebra.
  • Goal: working in industry, preferably as a data scientist or a machine learning engineer.
  • Expectation from the course: Getting familiar with different algorithms for learning from text and audio data to produce insights.
Sam

  • Sam is a 4th year undergrad in political science.
  • Courses taken before: introduction to Python and introduction to statistics.
  • Expectation from the course: She is fascinated by day-to-day applications of NLP such as smart compose and voice assistants and is curious to know more about how they work. She is also interested in applying NLP tools in her own discipline.
Eva

  • Eva is a first-year Master’s student in NLP.
  • Courses taken before: Solid foundation in mathematics, machine learning, and introductory linguistics.
  • Goal: She wants to pursue a Ph.D. in deep learning for NLP.
  • Expectation from the course: She is interested in gaining a solid understanding of important models, tasks, and tools in NLP so that she can carry out her research effectively.

Your tasks

Go to this Google doc and follow the instructions there.