Which topics to teach in what context?
Participatory Event 2
Day 1: June 10th, 14:00–15:00 ET (UTC−04:00)
How to connect: Zoom via Underline
Introduction
When we design NLP courses for different contexts, many of us struggle with which topics, models, and tasks to include or exclude. For example,
Should we teach hidden Markov models or automatic speech recognition in an introductory NLP course targeted towards linguistics majors?
Is it OK to omit semantic parsing from an introductory NLP course targeted towards computer science majors?
Should we skip details of LSTMs and focus more on transformer-based architectures in introductory NLP courses?
We also struggle with how to balance theory and practice: how much detail is appropriate in a given context? For example,
Should we go into the details of Gibbs sampling when teaching Latent Dirichlet Allocation for topic modeling to non-computer-science majors? Or is it more useful to spend that time on practical aspects such as the model's hyperparameters or the evaluation and interpretation of the topics it produces (see the LDA sketch after these questions)?
How much time should we spend on teaching LSTMs vs. showing how to implement them for different tasks using tools such as PyTorch or TensorFlow (see the LSTM sketch below)?
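To make the theory-vs-practice contrast concrete, here is a minimal sketch of the "practical" side of the first question: fitting an LDA topic model and inspecting its topics with scikit-learn (whose implementation uses variational inference rather than Gibbs sampling). The toy corpus and hyperparameter values are illustrative assumptions, not part of the event materials.

```python
# A minimal sketch of the practical side of teaching LDA: fitting a model
# and interpreting its topics. Corpus and hyperparameters are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "neural networks learn word representations",
    "parsers build syntactic trees for sentences",
    "speech recognition converts audio to text",
]  # toy corpus; a real course exercise would use a larger dataset

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

# Hyperparameters a practice-oriented lesson might dwell on:
lda = LatentDirichletAllocation(
    n_components=2,         # number of topics
    doc_topic_prior=0.1,    # alpha
    topic_word_prior=0.01,  # eta
    random_state=0,
)
lda.fit(counts)

# Interpret each topic via its highest-weight words.
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [vocab[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```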
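Similarly, here is a minimal PyTorch sketch of the kind of LSTM implementation exercise the second question alludes to; all shapes and hyperparameters are illustrative assumptions.

```python
# A minimal sketch of an LSTM text classifier students might build in a lab.
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=32, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embed(token_ids)      # (batch, seq, embed_dim)
        _, (hidden, _) = self.lstm(embedded)  # hidden: (1, batch, hidden_dim)
        return self.out(hidden[-1])           # logits: (batch, num_classes)

model = LSTMClassifier()
batch = torch.randint(0, 1000, (4, 12))  # 4 toy sequences of 12 token ids
logits = model(batch)
print(logits.shape)  # torch.Size([4, 2])
```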
In this participatory activity, you will create course schedules for NLP courses targeted at different NLP learner personas. You will be randomly assigned to a group of three to five participants and will work in breakout rooms.
Consider the following three NLP learners with different expectations.