I've compiled a collection of formatted notes and presentations from courses I've taken and TA'd while at Stanford. This also includes miscellaneous notes and Pluto.jl notebooks that may be useful for studying.

Table of Contents

Presentations

Safe planning under uncertainty using surrogate models

PhD Defense, Stanford University, 2025

Details the primary research contributions of my PhD relating to safe planning and safety validation.

Mini-lecture on SignalTemporalLogic.jl

AA228V/CS238V: Validation of Safety-Critical Systems, Stanford University, 2025

Mini-lecture on using SignalTemporalLogic.jl for property specification of safety-critical systems.

Agents for Safety-Critical Applications

Stanford Intelligent Systems Laboratory (SISL), 2023

Explains how POMDPs are applied to solve safety-critical problems in aviation, autonomous navigation, and geological sustainability.

Bayesian Safety Validation for Black-Box Systems

AIAA AVIATION Forum, 2023

Efficiently estimating the probability of failure for safety-critical systems, applied to a neural network runway detector for an autonomous aircraft.
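For context, the quantity being estimated in this line of work is the probability of failure over the system's operating domain (the notation below is my summary, not taken verbatim from the talk):

```latex
p_{\text{fail}} = \int_{\mathcal{X}} \mathbf{1}\{f(x) \in \text{fail}\}\, p(x)\, \mathrm{d}x
```

where $f$ is the black-box system, $\mathbf{1}\{\cdot\}$ indicates a failure outcome, and $p(x)$ is the operational distribution over inputs $x$.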

Letters to a Young Scientist: Annotated Lessons

CS239/AA229: Advanced Topics in Sequential Decision Making, Stanford University, 2020

Annotated lessons from Edward O. Wilson's Letters to a Young Scientist.

Learning Policies with External Memory

CS239/AA229: Advanced Topics in Sequential Decision Making, Stanford University, 2020

Simplified VAPS algorithm for online stigmergic policies, from Peshkin et al. ICML, 2001.

Markov Decision Processes (MDPs)

Decision Making Under Uncertainty using POMDPs.jl, Julia Academy, 2021

Definition and example of the Markov decision process (MDP) for a grid world problem, part of Decision Making Under Uncertainty using POMDPs.jl.
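For quick reference, the MDP formalism covered in this notebook can be summarized as the tuple

```latex
\langle \mathcal{S}, \mathcal{A}, T, R, \gamma \rangle
```

with state space $\mathcal{S}$, action space $\mathcal{A}$, transition model $T(s' \mid s, a)$, reward function $R(s, a)$, and discount factor $\gamma \in [0, 1)$. The defining Markov property is that the next state depends only on the current state and action, not on the full history.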

Partially Observable Markov Decision Processes (POMDPs)

Decision Making Under Uncertainty using POMDPs.jl, Julia Academy, 2021

Definition and example of the partially observable Markov decision process (POMDP) for the crying baby problem, part of Decision Making Under Uncertainty using POMDPs.jl.
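The POMDP extends the MDP tuple with partial observability; a common formulation (the notation here is my summary) is

```latex
\langle \mathcal{S}, \mathcal{A}, \mathcal{O}, T, R, O, \gamma \rangle
```

where $\mathcal{O}$ is the observation space and $O(o \mid a, s')$ is the observation model, i.e., the probability of observing $o$ after taking action $a$ and transitioning to state $s'$. Since the state is not directly observed, the agent instead maintains a belief over states.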

Beliefs: State Uncertainty

CS238/AA228: Decision Making Under Uncertainty, Stanford University, 2020

POMDPs, belief state representation, state uncertainty, particle filters, and Kalman filters.
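For discrete state spaces, the exact Bayesian belief update underlying these notes is

```latex
b'(s') = \frac{O(o \mid a, s') \sum_{s \in \mathcal{S}} T(s' \mid s, a)\, b(s)}
              {\sum_{s'' \in \mathcal{S}} O(o \mid a, s'') \sum_{s \in \mathcal{S}} T(s'' \mid s, a)\, b(s)}
```

where $b$ is the current belief, $a$ the action taken, and $o$ the resulting observation. Particle filters approximate this update by sampling when the state space is large or continuous, and Kalman filters give the closed-form solution under linear-Gaussian assumptions.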

Stanford Intelligent Systems Laboratory (SISL): An Overview

Stanford Center for Earth Resources Forecasting (SCERF), 2021

An overview of the research conducted at the Stanford Intelligent Systems Laboratory.

An Efficient Framework for Modular Autonomous Vehicle Risk Assessment (MAVRA)

IEEE International Conference on Intelligent Transportation Systems (ITSC), 2022

A framework for efficiently estimating the risk of autonomous vehicle policies in high-fidelity simulators.

Transferring Aviation Safety Lessons to the Road

The National Academies of Sciences, Engineering, and Medicine, 2021

How lessons from the safety validation of aviation software can be transferred to autonomous driving.

Adaptive Stress Testing of Trajectory Predictions in Flight Management Systems

IEEE/AIAA Digital Avionics Systems Conference, 2020

Black-box stress testing of an open-loop system with episodic reward.

A Bayesian Network Model of Pilot Response to TCAS RAs

Air Traffic Management Research and Development Seminar (ATM R&D Seminar), 2017

Collecting radar data to learn a Bayesian network pilot response model to aircraft collision avoidance advisories.

Using Julia as a Specification Language for Aircraft Collision Avoidance Systems (ACAS X)

JuliaCon, 2015

How the FAA is using the Julia programming language as a specification language for ACAS X.

Textbooks

I've re-typed and formatted several textbooks from Stanford course notes (the original authors are recognized within).

Probability for Computer Scientists

CS109: Probability for Computer Scientists, Stanford University, 2020

Counting, combinatorics (permutations and combinations), probability, conditional probability, independence, random variables, Bernoulli and binomial random variables, discrete distributions (Poisson, geometric, and negative binomial), continuous distributions, probability density function, cumulative distribution function, expectation and variance, uniform random variables, exponential random variables, the normal distribution, joint distributions, multinomial distribution, independent random variables, statistics of multiple random variables, conditional expectation, inference (Bayesian networks), continuous joint distributions (bivariate normal distribution), independent normal distributions, central limit theorem, sampling and the bootstrap method, maximum likelihood estimation, the beta distribution, maximum a posteriori, naive Bayes, linear regression and gradient ascent, and logistic regression.

Machine Learning

CS229: Machine Learning, Stanford University, 2021 [LaTeX code]

Machine learning, linear regression, least mean squares (LMS), logistic regression, classification, generalized linear models, ordinary least squares, generative learning algorithms, Gaussian discriminant analysis, Naive Bayes, Laplace smoothing, kernel methods, support vector machines (SVMs), deep learning, neural networks, backpropagation, regularization, cross validation, unsupervised learning, k-means clustering, the expectation-maximization (EM) algorithm, factor analysis, principal component analysis, independent component analysis, reinforcement learning and control, Markov decision processes, value iteration, and policy iteration.

Algorithms for Artificial Intelligence

CS221: Artificial Intelligence Principles and Techniques, Stanford University, 2021

Machine learning, linear predictors, loss minimization, linear regression, gradient descent, stochastic gradient descent, logistic regression, feature extraction, neural networks, efficient gradients, nearest neighbors, and Markov decision processes. (Work in progress)
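As a pocket reference for the gradient-based topics listed above, the basic gradient descent update is

```latex
\theta^{(t+1)} = \theta^{(t)} - \eta\, \nabla_\theta \mathcal{L}\big(\theta^{(t)}\big)
```

where $\eta$ is the step size (learning rate) and $\mathcal{L}$ is the training loss; stochastic gradient descent replaces the full gradient with the gradient on a single example (or mini-batch) sampled from the training set.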

Reinforcement Learning

CS234: Reinforcement Learning, Stanford University, 2021

Reinforcement learning, deep Q-learning, convolutional neural networks (CNNs), double deep Q-network (DDQN), dueling DQN, policy gradient, policy optimization, REINFORCE, and variance reduction. (Work in progress)

Course Notes

Review: Unconstrained Optimization

CS361/AA222: Engineering Design Optimization, Stanford University, 2020

Derivatives and gradients, bracketing, local descent, first-order methods, second-order methods, direct methods, stochastic methods, and population methods.

Review: Constrained Optimization

CS361/AA222: Engineering Design Optimization, Stanford University, 2020

Constraints, linear constrained optimization, multiobjective optimization, sampling plans, surrogate models, probabilistic surrogate models, surrogate optimization, and optimization under uncertainty.

Project: Constrained Optimization and Expression Optimization

CS361/AA222: Engineering Design Optimization, Stanford University, 2020

Linear constrained optimization using JuMP.jl and expression optimization for trinary star system motion using ExprOptimization.jl.

Implementation: Learning Policies with External Memory

CS239/AA229: Advanced Topics in Sequential Decision Making, Stanford University, 2020

An implementation of the simplified Value and Policy Search algorithm VAPS(1) originally presented by Peshkin et al. ICML, 2001.

Reinforcement Learning Algorithms and Equations

Robert J. Moss, 2020

Bellman equation, Q-learning, Sarsa, policy evaluation, policy iteration, value iteration, asynchronous value iteration (Gauss-Seidel value iteration), local approximation value iteration, trust region policy optimization (TRPO), actor-critic with experience replay, proximal policy optimization (PPO), Monte Carlo tree search, and the cross-entropy method.
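Two of the core equations collected in this reference: the Bellman optimality equation for the action-value function,

```latex
Q^*(s, a) = R(s, a) + \gamma \sum_{s'} T(s' \mid s, a) \max_{a'} Q^*(s', a')
```

and the model-free Q-learning update it motivates,

```latex
Q(s, a) \leftarrow Q(s, a) + \alpha \Big[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \Big]
```

with learning rate $\alpha$. Sarsa is the on-policy variant, replacing $\max_{a'} Q(s', a')$ with $Q(s', a')$ for the action $a'$ actually taken.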

Markov Decision Process: Chain Rule

Robert J. Moss, 2020

MDP graphical model, Markov property, and derivation of the state-action trajectory probability using the chain rule.
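The derivation follows from the chain rule of probability combined with the Markov property. For a trajectory $(s_1, a_1, \ldots, s_T)$ under policy $\pi$ and transition model $T$:

```latex
P(s_{1:T}, a_{1:T-1}) = P(s_1) \prod_{t=1}^{T-1} \pi(a_t \mid s_t)\, T(s_{t+1} \mid s_t, a_t)
```

The chain rule expands the joint over the full history; the Markov property then reduces each conditional to depend only on the current state (and, for transitions, the current action).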

Loss Functions in Machine Learning

Robert J. Moss, 2020

Zero-one loss, hinge loss, and logistic loss generated using TeX.jl.
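Written in terms of the margin $m = y\, f_\theta(x)$ (positive when the prediction agrees in sign with the label $y \in \{-1, +1\}$), the three losses are

```latex
\ell_{0\text{-}1}(m) = \mathbf{1}[m \le 0], \qquad
\ell_{\text{hinge}}(m) = \max(0,\, 1 - m), \qquad
\ell_{\text{logistic}}(m) = \log\!\big(1 + e^{-m}\big)
```

Hinge and logistic loss are convex upper bounds on the zero-one loss, which is what makes them tractable surrogates for gradient-based training.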

Deriving the Quadratic Formula

Robert J. Moss, 2020

Derivation of the quadratic formula using TeX.jl.
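The derivation completes the square. Starting from $ax^2 + bx + c = 0$ with $a \neq 0$:

```latex
x^2 + \frac{b}{a}x = -\frac{c}{a}
\;\;\Longrightarrow\;\;
\left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2}
\;\;\Longrightarrow\;\;
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
```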