Explore the Hugging Face ecosystem for NLP and generative AI. Use transformers, datasets, and model hub for text generation, classification, and fine-tuning.
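The kind of pipeline usage this course introduces can be sketched in a few lines. This is a minimal sketch; the checkpoint names are common public defaults on the Hub, not models prescribed by the course.

```python
# Minimal sketch of Hugging Face pipelines for text classification and
# generation. The checkpoint names are illustrative public defaults.
from transformers import pipeline

def classify(texts):
    """Sentiment classification with a small fine-tuned BERT variant."""
    clf = pipeline("text-classification",
                   model="distilbert-base-uncased-finetuned-sst-2-english")
    return clf(texts)

def generate(prompt, max_new_tokens=20):
    """Open-ended text generation with GPT-2."""
    gen = pipeline("text-generation", model="gpt2")
    return gen(prompt, max_new_tokens=max_new_tokens)[0]["generated_text"]

if __name__ == "__main__":
    print(classify(["I loved this course!"]))
    print(generate("Natural language processing is"))
```

The same `pipeline` entry point covers many other tasks (summarization, translation, question answering) by changing the task string.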
Update 1.1 as of 29/01/2025.

Topics covered: Generative AI, LLM, LangChain, HuggingFace, Ollama, OpenAI, Gemini, DeepSeek, Google NotebookLM, Azure AI Services (Azure OpenAI Service).

Recent reviews:

"Thorough explanation, going great so far. A very simplistic and straightforward introduction to Natural Language Processing. I will recommend this class to anyone looking towards Data Science."

"This course so far is breaking down the content into smart bite-size pieces and the professor explains everything patiently and gives just enough background so that I do not feel lost."

"This course is really good for me. It is easy to understand and it covers a wide range of NLP topics, from the basics and machine learning to deep learning. The code used is practical and useful. I am definitely satisfied with the content and surely recommend it to everyone who is interested in Natural Language Processing."

Update 1.0: FastText library for text classification section added.

Hi Data Lovers,

Do you have an idea of which artificial intelligence field is going to get big in the upcoming year? According to statista.com, which field of AI is predicted to reach $43 billion by 2025? If the answer is "Natural Language Processing", you are in the right place. Do you want to know how Google News classifies millions of news articles into hundreds of different categories? How Android speech recognition recognizes your voice with such high accuracy? How Google Translate actually translates hundreds of pairs of different languages into one
A comprehensive course on building, deploying, and optimizing AI models using Langchain and Hugging Face. It covers everything from the basics of Generative AI to advanced concepts like Retrieval-Augmented Generation (RAG) pipelines.
A comprehensive 7-course series that takes you from LLM business strategy to production deployment. You will learn to evaluate LLM opportunities, fine-tune models, and build production-ready applications using tools like Hugging Face and Python.
This IBM course explores transformers and key model frameworks like Hugging Face and PyTorch. It covers optimizing LLMs and advances to fine-tuning generative AI models using techniques like PEFT, LoRA, and QLoRA.
A free course with over 50 theoretical lessons and 10 practical projects, teaching how to train, fine-tune, and deploy LLMs into AI products. It covers SFT, RLHF, LoRA, and custom model training.
Unlock the power of Generative AI and learn how to build real-world applications using cutting-edge tools like ChatGPT, LangChain, Hugging Face, and more, even if you're not a developer.

This course starts with a fast-track module for non-coders, introducing you to practical no-code AI tools like Zapier, Canva AI, and Notion AI. You'll quickly understand how Generative AI works: no math, no jargon, just clear and practical insights.

You'll then dive deep into Large Language Models (LLMs), learning how models like GPT and open-source alternatives function, and how to interact with them through effective prompt engineering. Understand the difference between OpenAI's APIs and local models, and when to use each.

The course progresses with hands-on projects using the OpenAI API and LangChain to build intelligent assistants, custom chatbots, and agent-based tools. You'll explore how to integrate tools and functions, use LangGraph for complex multi-step workflows, and build applications like weather and calculator agents.

You'll also learn how to incorporate Hugging Face models, perform text classification, and explore LoRA fine-tuning basics, all with step-by-step guidance. The Retrieval-Augmented Generation (RAG) section will teach you how to connect AI with custom documents, PDFs, and websites using embeddings and vector databases like Pinecone, ChromaDB, and FAISS.

We'll also cover critical topics like AI safety, bias, responsible prompt engineering, and deploying your apps using tools like Streamlit, Gradio, and Hugging Face Spaces. You'll even learn how to add a simple frontend with HTML/CSS/JS to showcase your work live.

By the end of the course, you'll complete real-world capstone projects such as a Social Media Post Generator and a Podcast AI Summarizer, and learn how to build a portfolio on GitHub that demonstrates your skills to potential clients or employers. Whether you're a developer, freelancer, entrepreneur, or aspiring AI bui
This course on Coursera provides skills to optimize and deploy domain-specific large language models for advanced Generative AI applications. It covers supervised fine-tuning, parameter-efficient methods (PEFT), and reinforcement learning with human feedback (RLHF).
This course covers everything from Large Language Models (LLMs) and prompt engineering to fine-tuning, as well as advanced concepts like Direct Preference Optimization (DPO). You'll also dive deep into Retrieval-Augmented Generation (RAG), which enhances your LLMs' capabilities by integrating retrieval systems for more accurate responses. By the end of this course, you'll be equipped to create AI solutions that align closely with human intent and outperform standard models.

What You Will Get

In addition to the core topics, our course features in-depth, real-world case studies on fine-tuning, prompt engineering, and Retrieval-Augmented Generation (RAG). These case studies not only highlight cutting-edge techniques but also offer practical, hands-on insights into their application in real-world AI projects. By exploring actual scenarios and projects, learners will gain a deep understanding of how to effectively use these methods to solve complex challenges. The case studies are designed to bridge the gap between theory and practice, enabling participants to see how these advanced techniques are deployed in industry settings.

Moreover, these examples provide a step-by-step framework for applying theoretical concepts to real-world applications. Whether it's fine-tuning models for enhanced performance, engineering prompts for improved outputs, or leveraging retrieval systems to augment generation, learners will be able to confidently implement these strategies in their own projects. This ensures that by the end of the course, participants will not only have a solid foundation in generative AI concepts but also the ability to apply them in practical, impactful ways.
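The retrieval step at the heart of RAG can be illustrated with a toy sketch. Here bag-of-words vectors and cosine similarity stand in for the neural embedding model and vector database a real system would use; this is a deliberate simplification for illustration.

```python
# Toy illustration of the retrieval step in RAG. Bag-of-words vectors and
# cosine similarity stand in for an embedding model and a vector database.
import math
import re
from collections import Counter

def embed(text):
    """Hypothetical stand-in for an embedding model: a bag-of-words vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "LoRA adapts large models cheaply.",
    "Retrieval augmented generation grounds answers in documents.",
    "DPO aligns models with preference data.",
]
# In a full RAG pipeline, the retrieved passage is prepended to the LLM prompt.
best = retrieve("how does retrieval augmented generation work", docs)[0]
```

Swapping `embed` for a real sentence-embedding model and the sorted list for a vector index is what turns this sketch into a production retriever.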
Welcome! This comprehensive course is designed for individuals eager to dive into the world of Large Language Models (LLMs) and harness their power to create innovative applications that can simplify tasks in everyday life.

Course Overview

In this course, you will learn how to effectively use various libraries and frameworks, including Ollama, LangChain, CrewAI, and Hugging Face, to build practical projects that demonstrate the capabilities of LLMs. Through hands-on projects, you will gain a deep understanding of how these technologies work together to enhance productivity and creativity.

What You Will Learn

Understanding LLMs: Gain insights into the architecture and functioning of Large Language Models, including their applications in natural language processing (NLP).
Ollama and LangChain: Learn how to leverage Ollama for efficient model deployment and LangChain for building complex applications that integrate multiple components seamlessly.
Hugging Face Transformers: Explore the Hugging Face library to access a wide range of pre-trained models for various NLP tasks.
Practical Applications: Implement real-world projects that showcase the power of LLMs in different contexts.

Project Highlights

Learning Python Tool with Ollama: Create an interactive tool that helps users learn Python programming through guided exercises and instant feedback using an LLM.
Make a Video Describer: Develop an application that generates descriptive text for video content, enhancing accessibility and understanding for users.
Chat with PDF using Ollama LLM: Build a chat interface that allows users to ask questions about the content of PDF documents, provi
Welcome to "Learn Hugging Face for Mastering Generative AI with LLMs". In today's AI-driven world, Hugging Face has become a central platform for working with Large Language Models (LLMs), which have revolutionized generative AI by enabling machines to generate human-like text, answer questions, and even create original content. This course is meticulously designed to give you a deep understanding of these models and how to harness their power using Hugging Face.

Our journey begins with a robust introduction to LLMs, exploring their intricacies and how to manage their compute requirements, all within the Hugging Face ecosystem. From there, we dive into the world of Hugging Face, which provides an extensive collection of pre-trained models that can be applied in a wide range of innovative applications.

Practical knowledge is essential, so the course transitions into a deep dive into Transformers, the key technology behind LLMs, with a special focus on Hugging Face implementations. You'll get hands-on experience with Hugging Face tools, manipulating datasets, building custom models, and mastering tokenization.

Finally, we emphasize training, fine-tuning, and quantization of models downloaded from Hugging Face. Learn how to adapt LLMs to your needs, whether for summarization or text generation. With techniques like instruction fine-tuning and PEFT, you'll master the art of fine-tuning models. We'll even show you how to train a GPT-2 from scratch using Hugging Face to generate text from a custom dataset, and finally how to quantize your models so that they take up less memory.
Are you interested in harnessing the power of AI to create groundbreaking language-based applications? Look no further than LangChain and Gen AI, a comprehensive course that will take you from novice to expert. Implement Generative AI (GenAI) apps with the LangChain framework using different LLMs. By implementing AI applications powered by state-of-the-art LLM models like OpenAI and Hugging Face using Python, you will embark on an exciting project-based learning journey. With LangChain, you will gain the skills and knowledge necessary to develop innovative LLM solutions for a wide range of problems.

Here are some of the projects we will work on:

Project 1: Construct a dynamic question-answering application with the unparalleled capabilities of LangChain, OpenAI, and Hugging Face Spaces.
Project 2: Develop an engaging conversational bot using LangChain and OpenAI to deliver an interactive user experience.
Project 3: Create an AI-powered app tailored for children, facilitating the discovery of related classes of objects and fostering educational growth.
Project 4: Build a captivating marketing campaign app that utilizes the persuasive potential of well-crafted sales copy, boosting sales and brand reach.
Project 5: Develop a ChatGPT clone with an added summarization feature, delivering a versatile and invaluable chatbot experience.
Project 6: MCQ Quiz Creator App - Seamlessly create multiple-choice quizzes for your students using LangChain and Pinecone.
Project 7: CSV Data Analysis Tool - Helps you analyze your CSV file by answering your queries about its data.
Project 8: YouTube Script Writing Tool - Effortlessly create compelling YouTube scripts with this user-friendly and efficient script-writing tool.
Project
This course equips you with the skills to build real-world NLP applications using transformer models from the Hugging Face ecosystem. You will gain hands-on experience with speech-to-text pipelines, sentiment analysis, and text generation.
Gain insights into fine-tuning LLMs with LoRA and QLoRA. Explore parameter-efficient methods, LLM quantization, and hands-on exercises to adapt AI models with minimal resources efficiently.
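The core idea behind the LLM quantization mentioned above can be shown with plain arithmetic. The absmax int8 scheme below is a simplified illustration of the principle, not the exact method used by any particular library or course.

```python
# Simplified absmax int8 quantization: map float weights onto integers in
# [-127, 127] with a single scale factor, then dequantize approximately.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored value differs from the original by at most scale / 2,
# while each weight now needs only one byte instead of four.
```

Real schemes (such as the 4-bit NormalFloat used by QLoRA) refine this idea with per-block scales and non-uniform value grids, but the quantize/dequantize round trip is the same.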
In partnership with AMD, this course teaches how to apply fine-tuning and reinforcement learning to improve LLM behavior, reasoning, and safety. You will learn about the post-training lifecycle, core techniques like RLHF and LoRA, and how to design evaluations to detect issues like reward hacking and diagnose failures.
This course, developed in collaboration with Hugging Face, teaches the fundamentals of model quantization. You will learn to compress large models, making them more accessible and efficient, using the Hugging Face Transformers library and Quanto.
Mastering LoRA Fine-Tuning on Llama 1.1B with the Guanaco Chat Dataset: Training on Consumer GPUs

Unleash the potential of Low-Rank Adaptation (LoRA) for efficient AI model fine-tuning with our groundbreaking Udemy course. Designed for forward-thinking data scientists, machine learning engineers, and software engineers, this course guides you through LoRA fine-tuning applied to the cutting-edge Llama 1.1B model, using the diverse Guanaco chat dataset. LoRA's revolutionary approach enables the customization of large language models on consumer-grade GPUs, democratizing access to advanced AI technology by optimizing memory usage and computational efficiency.

Dive deep into the practical application of LoRA fine-tuning within the HuggingFace Transformers framework, leveraging its Parameter-Efficient Fine-Tuning library alongside the intuitive HuggingFace Trainer. This combination not only streamlines the fine-tuning process but also significantly enhances learning efficiency and model performance.

What You Will Learn:

Introduction to LoRA Fine-Tuning: Grasp the fundamentals of Low-Rank Adaptation and its pivotal role in advancing AI model personalization and efficiency.
Hands-On with Llama 1.1B and Guanaco Chat Dataset: Experience direct interaction with the Llama 1.1B model and Guanaco chat dataset, preparing you for real-world application of LoRA fine-tuning.
Efficient Training on Consumer GPUs: Explore the transformational capability of LoRA to fine-tune large language models on consumer hardware, emphasizing its low memory footprint and computational advantages.
Integration with HuggingFace Transformers: Master the use of the HuggingFace Parameter-Efficient Fine-Tuning library and the HuggingFace Trainer for streamlined and effective model adaptation.
Insightful Analysis of the L
Do not take this course if you are an ML beginner. This course is designed for those who are interested in pure coding and want to fine-tune LLMs rather than focus on prompt engineering. Otherwise, you may find it difficult to follow.

Welcome to "Mastering Transformer Models and LLM Fine Tuning", a comprehensive and practical course for practitioners of Natural Language Processing (NLP). This course delves deep into the world of Transformer models, fine-tuning techniques, and knowledge distillation, with a special focus on popular models such as Phi-2, LLaMA, T5, BERT, DistilBERT, MobileBERT, and TinyBERT.

Course Overview:

Section 1: Introduction
Get an overview of the course and understand the learning outcomes. Get introduced to the resources and code files you will need throughout the course.

Section 2: Understanding Transformers with Hugging Face
Learn the fundamentals of Hugging Face Transformers. Explore Hugging Face pipelines, checkpoints, models, and datasets. Gain insights into Hugging Face Spaces and Auto Classes for seamless model management.

Section 3: Core Concepts of Transformers and LLMs
Delve into the architectures and key concepts behind Transformers. Understand the applications of Transformers in various NLP tasks. Get introduced to transfer learning with Transformers.

Section 4: BERT Architecture Deep Dive
Explore BERT's architecture in detail and its importance in context understanding. Learn about Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) in BERT. Understand BERT fine-tuning and evaluation techniques.

Section 5: Practical Fine-Tuning with BERT
Are you ready to boost your Python skills and explore the exciting world of Generative AI? This course is designed to help you ace certification exams and deepen your understanding of the essential Python tools used in generative AI development. With 50 practice questions based on real-world AI scenarios, you'll test and expand your knowledge of Large Language Models (LLMs), Hugging Face Transformers, LangChain, and image generation frameworks like Stable Diffusion.

Through this course, you'll cover critical concepts in Python programming, AI model integration, and prompt engineering. The multiple-choice and multi-select questions are structured to challenge you on real-world AI applications, helping you prepare for AI developer interviews, certification exams, and hands-on projects. Whether you're new to Python or already an experienced developer, this course is your guide to mastering Generative AI technologies.

Learn how to efficiently interact with AI libraries, manage data workflows, and develop advanced AI solutions. By the end of this course, you'll be ready to apply your skills to create AI-driven applications and confidently face the challenges of Generative AI development.

Why Take This Course?

50 real-world-based MCQs & advanced AI problem-solving
Practical exposure to top AI frameworks & Python integration
Ideal prep for AI certifications and developer interviews
Hands-on focus: LLMs, Hugging Face, LangChain, Stable Diffusion

Certification Note: Upon successful completion of this course and its assessments, you are eligible for an official course certificate.

Linked Topics: Python Programming, Generative AI, Large Language Models (LLMs), Hugging Face Transformers, LangChain, Stable Diffusion
Master Generative AI with LangChain and Hugging Face

Unlock the potential of generative AI and LLMs (Large Language Models) with our hands-on course. Dive deep into LangChain and Hugging Face, two of the most powerful tools in the AI space, and learn prompt engineering through practical examples. This course is designed to provide you with the skills to implement gen AI models effectively.

Why Choose This Course?

Generative AI is transforming industries from marketing to healthcare. Our course offers a unique opportunity to harness this technology effectively.
Project-Based Learning: Engage in innovative projects, from text summarizers to text-to-video animations.
Hands-On Expertise: Master LangChain and Hugging Face by applying them to real-world scenarios.
Up-to-Date Knowledge: Work with the latest models and frameworks, staying ahead in the rapidly evolving AI landscape.

What You'll Build

This course is structured around four key projects designed to teach you the practical applications of generative AI:

Text Summarizer with GUI: Integrate LangChain components with Hugging Face's BART model. Load and summarize text from PDF documents. Design an intuitive graphical user interface (GUI) for a seamless user experience.
Interactive AI Assistant with GUI: Develop a multi-functional assistant to handle summaries, queries, and more. Implement LangChain's query and summary handlers for efficiency. Create a user-friendly GUI and test the assistant's capabilities.
Text-to-Image Generator: Transform text inputs into visually stunning images using Hugging Face
This course teaches you how to use open-source models from the Hugging Face Hub for various tasks like NLP, audio, and image processing. You will learn to use the transformers library to perform these tasks with just a few lines of code and deploy your applications using Gradio and Hugging Face Spaces.
An instructor-led, live training course (online or onsite) for intermediate-level developers and AI practitioners on using LoRA to efficiently fine-tune large-scale models, especially in resource-constrained environments.
Explore all AI tools and technologies.
Explore courses by AI concepts and disciplines.
Browse courses organized by learning category.
Browse courses from Coursera, edX, Udemy, and more.
Search and filter across all AI and ML courses.
Find courses for your career path — data scientist, ML engineer, AI researcher, and more.
Start your AI journey with beginner-friendly courses.