David Fridovich-Keil

About me

Hi there! I just finished my PhD at UC Berkeley, where I worked on autonomous robotics and optimal control, advised by Prof. Claire Tomlin in the Hybrid Systems Lab and the Berkeley AI Research lab. During my PhD, I was supported by an NSF Graduate Research Fellowship. I'm currently a postdoc in the same group, and will be moving to Stanford as a postdoc with Prof. Mac Schwager starting in Fall 2020. I'm excited to be joining the faculty of UT Austin's Department of Aerospace Engineering and Engineering Mechanics in August 2021.

Outside of research, I like to play squash and frisbee, read fantasy novels, and play acoustic guitar.

Contact info

The best way to reach me is by email. My address is: dfk at eecs dot berkeley dot edu.

Research interests

I am generally interested in optimal control, motion planning, and safe autonomy; for a broad overview, please have a look at my PhD dissertation. So far, I have primarily worked on game-theoretic techniques for motion planning that directly encode robustness to external disturbances and the actions of other agents, and that operate in real time. This work falls into two main categories: efficient motion planning with reachability-based safety guarantees, and solving large-scale multi-player nonlinear differential games. I have also worked on a number of other projects related to reinforcement learning, distributed control, adaptive receding horizon control, and active search.

Efficient motion planning with safety guarantees

In motion planning and control, there is often a division between methods that work well in real time and those that provide strict safety guarantees. For example, iterative LQR (iLQR) is a popular method for generating smooth optimal control sequences in real time for relatively high-dimensional systems; yet iLQR does not guarantee robustness, e.g., against environmental disturbances. By contrast, Hamilton-Jacobi (HJ) reachability can provide hard safety guarantees and disturbance rejection, but for general nonlinear systems it is only tractable in fairly low dimensions.
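To make the iLQR side of this tradeoff concrete, here is a minimal toy sketch for a 1-D double integrator. Since this example is linear-quadratic, the backward Riccati pass is exact; on a nonlinear system, iLQR runs the same alternation of backward and forward passes after linearizing dynamics and quadratizing costs about the current trajectory. All weights, horizons, and variable names here are illustrative, not from any particular implementation.

```python
import numpy as np

def ilqr(x0, N=50, dt=0.1, iters=5):
    """Minimal iLQR: alternate a Riccati-style backward pass with a forward rollout."""
    A = np.array([[1.0, dt], [0.0, 1.0]])   # dynamics Jacobian (constant for this toy)
    B = np.array([[0.0], [dt]])
    Q = np.diag([1.0, 0.1])                 # running/terminal state cost weights
    R = np.array([[0.1]])                   # control effort weight
    xs, us = np.zeros((N + 1, 2)), np.zeros((N, 1))
    xs[0] = x0
    for k in range(N):                      # initial rollout with zero controls
        xs[k + 1] = A @ xs[k] + B @ us[k]
    for _ in range(iters):
        # Backward pass: recurse on the quadratic value function approximation.
        Vx, Vxx = Q @ xs[N], Q
        ks, Ks = [], []
        for k in reversed(range(N)):
            Qx = Q @ xs[k] + A.T @ Vx
            Qu = R @ us[k] + B.T @ Vx
            Qxx = Q + A.T @ Vxx @ A
            Quu = R + B.T @ Vxx @ B
            Qux = B.T @ Vxx @ A
            kff = -np.linalg.solve(Quu, Qu)   # feedforward term
            K = -np.linalg.solve(Quu, Qux)    # feedback gain
            Vx = Qx + K.T @ Quu @ kff + K.T @ Qu + Qux.T @ kff
            Vxx = Qxx + K.T @ Quu @ K + K.T @ Qux + Qux.T @ K
            ks.append(kff); Ks.append(K)
        ks.reverse(); Ks.reverse()
        # Forward pass: roll out the updated feedback + feedforward policy.
        xnew, unew = np.zeros_like(xs), np.zeros_like(us)
        xnew[0] = x0
        for k in range(N):
            unew[k] = us[k] + ks[k] + Ks[k] @ (xnew[k] - xs[k])
            xnew[k + 1] = A @ xnew[k] + B @ unew[k]
        xs, us = xnew, unew
    return xs, us

xs, us = ilqr(np.array([1.0, 0.0]))
print(xs[-1])  # state driven close to the origin
```

The key point is that each pass is linear in the horizon length, which is why iLQR scales to high-dimensional systems in real time; nothing in the recursion, however, certifies safety under disturbances.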

FaSTrack is a recent approach that uses an offline reachability computation to inform an online motion planner like iLQR, blending the best of these two types of algorithms. I have worked on several projects extending FaSTrack. In Planning, Fast and Slow, we broaden the concept of motion planning to allow multiple different types of planning algorithms (with potentially different notions of state) to run concurrently and stitch trajectories together seamlessly, while retaining the original FaSTrack safety guarantee. We have also tested a neural network-based HJ reachability solver and used it to compute conservative approximations of reachable sets for higher-dimensional systems. Finally, I recently built a high-level, graph-based wrapper around kinodynamic planners to extend the modular FaSTrack guarantees to a priori unknown environments while retaining recursive safety and liveness.
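To illustrate how a FaSTrack-style guarantee is used online, here is a toy sketch: the planner works with a simple kinematic model, and every candidate segment is checked against obstacles inflated by the offline-computed tracking error bound (TEB), so the full-order tracker is guaranteed to stay collision-free. All names and numbers below are illustrative, not from the actual implementation.

```python
import math

TEB = 0.5  # assumed offline-computed worst-case tracker-planner deviation (meters)

def segment_clear(p, q, obstacle_center, obstacle_radius, steps=100):
    """Check a straight planner segment against an obstacle inflated by the TEB."""
    inflated = obstacle_radius + TEB
    for i in range(steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if math.hypot(x - obstacle_center[0], y - obstacle_center[1]) < inflated:
            return False
    return True

# A segment passing 1.2 m from a 0.5 m obstacle is safe (1.2 > 0.5 + 0.5) ...
print(segment_clear((0, 1.2), (2, 1.2), (1, 0), 0.5))  # True
# ... but one passing 0.9 m away is not, once the TEB margin is added.
print(segment_clear((0, 0.9), (2, 0.9), (1, 0), 0.5))  # False
```

The division of labor is the whole point: the expensive reachability analysis that produces the TEB happens once offline, while the online check above is cheap enough to run inside any planner.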

Confidence-aware prediction of another agent's motion

It's always nice to know that, when a motion planner says the system will remain safe, it actually will remain safe at runtime despite potential modeling errors; that is precisely why studying robust control is important. However, an autonomous system may also encounter other moving agents at runtime, and when this happens it must make some assumptions about how they will behave. Unfortunately, any model we might construct to predict the motion of these other agents will, in general, fail systematically in some situations. To be robust to this inevitability, we proposed a novel Bayesian framework for inferring our model confidence online and dynamically adjusting predictions to be more conservative whenever that confidence drops. We took a first step toward incorporating uncertain predictions of a pedestrian into FaSTrack in a paper at RSS 2018, extended at IJRR, and we recently extended this approach to handle multiple pedestrians and multiple robots.
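The confidence-inference step can be sketched in a few lines: maintain a discrete belief over a "rationality" parameter beta in a Boltzmann observation model, and update it each time the other agent acts. A low posterior beta means low model confidence, which signals the predictor to broaden its forecasts. The utilities and parameter values below are toy assumptions for illustration only.

```python
import numpy as np

betas = np.array([0.1, 1.0, 10.0])   # candidate "rationality"/confidence levels
belief = np.ones(3) / 3              # uniform prior over model confidence

def action_likelihood(utilities, action, beta):
    """Boltzmann (softmax) observation model: P(a) is proportional to exp(beta * U(a))."""
    p = np.exp(beta * utilities)
    return p[action] / p.sum()

def update(belief, utilities, observed_action):
    """One Bayesian update of the confidence belief after observing an action."""
    like = np.array([action_likelihood(utilities, observed_action, b) for b in betas])
    post = belief * like
    return post / post.sum()

# If the agent keeps choosing its highest-utility action, confidence grows;
# erratic choices would instead shift mass to low beta (broader predictions).
utilities = np.array([1.0, 0.0, -1.0])
for _ in range(8):
    belief = update(belief, utilities, observed_action=0)
print(belief)  # posterior mass concentrates on beta = 10.0
```

The appeal of this formulation is that it never claims the human model is right; it simply measures, online, how well the model is currently explaining observed behavior.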

Solving large multi-player nonlinear differential games in real time

Differential games are a widely applicable mathematical tool and offer an attractive alternative to traditional formulations of motion planning problems. In particular, motion planning problems are often posed either in static environments or in dynamic environments where the predicted motion of other agents is completely independent of the robot's planned trajectory. Unfortunately, this can place an undue burden on the predictive model, which must be precise despite enormous uncertainty. Differential game theory offers an exciting alternative: rather than fixing a prediction beforehand, we can presume that other agents are optimizing some known (or estimated) objectives, and solve a differential game for a local equilibrium. By effectively coupling prediction and planning, this approach shifts the enormous burden of making accurate predictions to the potentially more straightforward task of modeling short-term dynamic objectives.

Until relatively recently, differential games were widely considered computationally intractable for general nonlinear systems with multiple players and arbitrary objectives. Several approximation techniques have been explored in the literature, but to my knowledge none has been seriously considered in industry. My own work in this area centers on a fast second-order solver based on iterative LQR, a standard algorithm for nonlinear MPC used in the autonomous vehicle industry. I am actively working on a real-time C++ implementation, available open-source. I've done some preliminary tests in the lab, and should soon have a video to post from a sweet demo onboard a Boeing experimental aircraft. Stay tuned!
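The subproblem at the heart of such an iterative solver is a linear-quadratic game: after linearizing the dynamics and quadratizing each player's cost around the current trajectory, the feedback Nash gains come from a coupled Riccati recursion. Below is a hedged, two-player Python sketch of that recursion (my actual implementation is in C++; all matrices in the demo are toy values).

```python
import numpy as np

def lq_game_gains(A, B1, B2, Q1, Q2, R1, R2, N):
    """Feedback Nash gains for a two-player discrete-time LQ game,
    computed by the standard coupled Riccati recursion."""
    m1 = B1.shape[1]
    Z1, Z2 = Q1.copy(), Q2.copy()          # terminal value matrices
    gains = []
    for _ in range(N):
        # Stack both players' first-order conditions into one linear system.
        S = np.block([[R1 + B1.T @ Z1 @ B1, B1.T @ Z1 @ B2],
                      [B2.T @ Z2 @ B1,      R2 + B2.T @ Z2 @ B2]])
        Y = np.vstack([B1.T @ Z1 @ A, B2.T @ Z2 @ A])
        K = np.linalg.solve(S, Y)
        K1, K2 = K[:m1], K[m1:]
        F = A - B1 @ K1 - B2 @ K2          # closed-loop dynamics
        Z1 = Q1 + K1.T @ R1 @ K1 + F.T @ Z1 @ F
        Z2 = Q2 + K2.T @ R2 @ K2 + F.T @ Z2 @ F
        gains.append((K1, K2))
    gains.reverse()                        # return in time-forward order
    return gains

# Toy demo: scalar unstable system, both players regulating toward zero.
A = np.array([[1.1]]); B1 = B2 = np.array([[1.0]])
Q = R = np.array([[1.0]])
x = np.array([1.0])
for K1, K2 in lq_game_gains(A, B1, B2, Q, Q, R, R, N=10):
    x = A @ x - B1 @ (K1 @ x) - B2 @ (K2 @ x)
print(x)  # driven near the origin despite the unstable open loop
```

Like iLQR, each sweep is linear in the horizon length, which is what makes real-time solution of multi-player games plausible in the first place.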

Links

  1. GitHub
  2. Google Scholar
  3. LinkedIn