ME 343 Homepage
Welcome to this class. We hope you will enjoy it!
This is the first time this class is offered by the Mechanical Engineering Department, so we will be experimenting with the content a bit. Here is the tentative list of topics; we will adjust it as we go, depending on interest and the time remaining:
- Gaussian process regression
- Support vector machine for classification; kernel machines
- Deep learning
- Recurrent neural networks
- Generative adversarial networks (GANs)
- Physics-informed learning machines (a new method specific to ME!)
- Reinforcement learning
- Markov decision processes, Bellman equation, Monte-Carlo tree search, and dynamic programming
- Temporal-difference learning (if time allows)
The material for this class is hosted on GitHub. It can be downloaded from the main repository page:
https://github.com/stanford-me343/stanford-me343.github.io
If you click the green “Clone or download” button, you can download all the files as a zip archive.
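If you prefer to script the download, the short Python sketch below fetches and unpacks the same zip archive using only the standard library; it assumes the repository’s default branch is named master, and the local file and folder names are just placeholders.

```python
# Minimal sketch: download and unpack the course repository as a zip archive.
# Assumes the default branch is "master"; adjust the URL if the branch differs.
import urllib.request
import zipfile

ARCHIVE_URL = ("https://github.com/stanford-me343/"
               "stanford-me343.github.io/archive/master.zip")

# Fetch the archive to a local file.
urllib.request.urlretrieve(ARCHIVE_URL, "me343-material.zip")

# Extract every file into a local folder.
with zipfile.ZipFile("me343-material.zip") as archive:
    archive.extractall("me343-material")
```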
Office hours
- Tuesday: 7 PM to 8 PM (Hojat)
- Wednesday: 10 AM to 11 AM (Ziyi/Hojat)
- Thursday: 10 AM to 11 AM (Ziyi)
- Friday: 9 AM to 11 AM (Prof. Darve)
Office hours with TAs are held in the Huang basement. Prof. Darve’s office hours are in building 520, room 125.
Course material and links
Reading material
Curated list of scientific machine learning papers from Paul Constantine.
Contributors: Nathan Baker, Jed Brown, Reagan Cronin, Ian Grooms, Jan Hesthaven, Des Higham, Katy Huff, Mark Kamuda, Julia Ling, Vasudeva Murthy, Houman Owhadi, Christoph Schwab.
Curation criteria:
- has ML, AI, Big Data, or related terms in the title
- comes from a scientific journal
- bias toward broad audience journals
- claims application to a scientific field or problem
- bias toward computational sciences
- bias toward recent publications
- bias toward perspective/prospective-type articles (e.g., “opportunities and challenges”) and surveys/reviews
- bias toward materials design, fluid dynamics, and some environmental sciences
- bias against arXiv papers and preprints
- bias against medicine and related fields
- bias against social sciences and related fields
- bias against fast algorithms or HPC implementations
General book about machine learning: The Hundred-Page Machine Learning Book, by Andriy Burkov. Relatively easy to read, with a discussion of all the fundamental concepts, although it does not cover more advanced topics.
Reading by topic
Reinforcement learning
GAN
- GAN Lab by M. Kahng, N. Thorat, D.H. Chau, F.B. Viegas, and M. Wattenberg
- GAN series, blog by Jonathan Hui
- An overview of gradient descent optimization algorithms, blog by Sebastian Ruder
- GAN tutorial
- Generative adversarial nets by I.J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio
- Wasserstein GAN by M. Arjovsky, S. Chintala, L. Bottou
- Improved training of Wasserstein GANs by I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, A. Courville
- InfoGAN: interpretable representation learning by information maximizing generative adversarial nets by X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, P. Abbeel
- Conditional generative adversarial nets by M. Mirza, S. Osindero
Deep learning
- Deep learning by Y. LeCun, Y. Bengio, and G. Hinton, Nature 521 (2015): 436-444
- Deep learning by I. Goodfellow, Y. Bengio, and A. Courville
- Deep learning summer school, Montreal 2015, with many video presentations and tutorials
- Deep learning for perception, course from Virginia Tech
- Deep learning methods and applications, online book by L. Deng and D. Yu
- Neural networks and deep learning, online book by M. Nielsen
- Optimization methods for large-scale machine learning by L. Bottou, F.E. Curtis, and J. Nocedal. Among other things, this paper discusses the stochastic gradient method.
SVM
GPR