Large Language Model (LLM)

Large language models are artificial neural networks with millions to billions of parameters, pre-trained on vast amounts of unlabeled text using self-supervised or semi-supervised learning. Their transformer architecture allows for massively parallel processing during training, which has largely displaced older task-specific supervised models. Through pre-training, LLMs acquire broad knowledge about language, but they also absorb any inaccuracies and biases present in their training corpora.
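The self-supervised objective mentioned above is usually next-token prediction: no human labels are needed, because the text itself supplies the targets by shifting the token sequence one position. A minimal sketch (toy word-level tokens; real models operate on subword tokens and feed these pairs to a neural network):

```python
def make_lm_examples(tokens):
    """Build (context, target) pairs for next-token prediction.

    The model sees tokens[0..n-2] as inputs and must predict
    tokens[1..n-1] -- the same text, shifted by one position.
    """
    inputs = tokens[:-1]
    targets = tokens[1:]
    return list(zip(inputs, targets))

tokens = ["large", "language", "models", "predict", "the", "next", "token"]
for context, target in make_lm_examples(tokens):
    print(f"{context!r} -> {target!r}")
```

This is why unlabeled text suffices for pre-training: every position in the corpus becomes a training example automatically.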

3 courses cover this concept

CS 229: Machine Learning

Stanford University

Winter 2023

This comprehensive course covers machine learning principles spanning supervised, unsupervised, and reinforcement learning. Topics also include neural networks, support vector machines, the bias-variance tradeoff, and many real-world applications. It requires a background in computer science, probability, multivariable calculus, and linear algebra.

COS 484: Natural Language Processing

Princeton University

Spring 2023

This course introduces the basics of NLP, including recent deep learning approaches. It covers a wide range of topics, such as language modeling, text classification, machine translation, and question answering.

CS 224V: Conversational Virtual Assistants with Deep Learning

Stanford University

Fall 2022

This course focuses on the creation of effective, personalized, conversational assistants using large language neural models. It involves both theory and practical assignments, offering students a chance to design their own open-ended course project. Familiarity with NLP and task-oriented agents is beneficial.
