Explore responsible AI development with courses on ethics, bias, fairness, transparency, and safe AI deployment practices.
A comprehensive certificate program that equips professionals to lead the safe, secure, and responsible development and deployment of AI systems. Across 10 modules, the course covers the full AI lifecycle, from generative AI fundamentals and architecture to governance, risk management, privacy, and cloud security.
An educational initiative funded by the EPSRC Impact Acceleration Account at Aston University. The project aims to foster an interdisciplinary approach to responsible AI in medical imaging.
This course explores the ethical challenges and complexities of AI's role in mental health, covering topics like bias, misinformation, privacy, and patient safety. It delves into advancements in computing, social robotics, and NLP techniques used in mental health analysis. The course is designed for mental health professionals, policymakers, and tech leaders.
A two-part series that guides public sector professionals through building responsible AI initiatives, focusing on managing risks and developing AI skills to transform government operations.
This course explores the ethics and responsible use of generative AI tools. Learners will engage with these tools with a focus on intentionality, sustainability, and responsibility, and learn to evaluate them using the SIFT process.
This course explores the best practices for responsible AI implementation in medicine. It is part of a specialized set of programs designed to train healthcare professionals to effectively use AI-driven tools, covering topics like bias in AI models, explainability, and the importance of human oversight.
An introductory course explaining the importance of responsible AI and how Google implements it in its products. It covers Google's 7 AI principles and provides a high-level understanding of how to use AI features ethically.
You will build a binary classification machine learning model to predict whether a person is looking for a new job. You'll work through an end-to-end machine learning project: data collection, exploration, feature engineering, model selection, data transformation, model training, model evaluation, and model explainability. We will brainstorm ideas at each step, and by the end of the project you'll be able to explain which features determine whether someone is looking for a new job. The template of this Jupyter Notebook can be applied to many other binary classification use cases: will X or Y happen, will a user choose A or B, will a person sign up for my product (yes or no), and so on. You will be able to apply the concepts learned here to many useful projects throughout your organization. This course is best suited for those with a beginner-to-senior-level understanding of Python and data science. Beginners should feel free to dive in and ask questions along the way; for more advanced learners, it can serve as a refresher on model explainability, especially if your experience with it is limited. We hope you enjoy this course and have fun with the project!
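The workflow above can be sketched as a single scikit-learn pipeline. This is a minimal illustration, not the course's actual notebook: the dataset is synthetic (generated with `make_classification`), and the feature names are hypothetical stand-ins for the job-seeker attributes the course uses.

```python
# Minimal end-to-end binary classification sketch: transform, train,
# evaluate, and explain. Synthetic data; hypothetical feature names.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# "Data collection": a synthetic stand-in for the real dataset.
X, y = make_classification(n_samples=1000, n_features=6,
                           n_informative=3, random_state=0)
features = ["experience_years", "training_hours", "city_index",
            "education_level", "company_size", "last_new_job"]  # hypothetical
df = pd.DataFrame(X, columns=features)

# Exploration: check the class balance before modeling.
print(pd.Series(y).value_counts(normalize=True))

# Data transformation + model selection, wrapped in one pipeline.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Model training and evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    df, y, test_size=0.2, random_state=0, stratify=y)
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"ROC AUC: {auc:.3f}")

# Model explainability: permutation importance ranks which features
# most affect the prediction "is this person looking for a new job?".
imp = permutation_importance(model, X_test, y_test,
                             n_repeats=10, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

Swapping in a real dataset only requires replacing the synthetic `df` and `y`; the transform-train-evaluate-explain structure stays the same, which is what makes the notebook a reusable template.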
Master responsible AI: learn fairness, bias mitigation, explainable AI, and data privacy to design ethical AI systems, and future-proof your skills in trustworthy AI practices.
A presentation exploring the state-of-the-art applications of AI in medical imaging through the lens of Responsible AI. It addresses critical ethical considerations such as bias mitigation, transparency, accountability, and data privacy, along with regulatory and implementation challenges.
Explore all AI and machine learning topics.
Browse courses organized by learning category.
Browse courses from Coursera, edX, Udemy, and more.
Search and filter across all AI and ML courses.
Find courses for your career path — data scientist, ML engineer, AI researcher, and more.
Start your AI journey with beginner-friendly courses.