Mathematics and Computer Science Seminars

The Mathematics and Computer Science Seminar topics range widely, but typically focus on original research, technical exposition, snapshots of working life, or teaching.

Seminar Schedule (Fall 2018 - Spring 2019)

Seminar attendees are invited to gather 15 minutes prior to the talk for light refreshments and socializing.

9/24, 2018

Honest Talk about Graduate School in Math, CS, and Physics

Profs. David Latimer, Rachel Pepper, Jake Price, Adam Smith, and Courtney Thatcher

University of Puget Sound

4pm, Thompson 391

10/1, 2018

Geodesics on Platonic Solids

Prof. Jayadev S. Athreya

Associate Professor and Director, Washington Experimental Mathematics Lab

University of Washington

4pm, Thompson 391


Abstract: In joint work with David Aulicino and Pat Hooper, we study the problem of finding closed geodesics passing through exactly one vertex on the surfaces of Platonic solids. We show that there are no such trajectories on any of the solids except the dodecahedron, on which there are 31 equivalence classes of such trajectories. The talk will be elementary and accessible to undergraduates.

10/22, 2018

Modeling Sales Opportunities for Microsoft

Kristine Jones, Ph.D.

Senior Data Science Technical Lead

Azure Data, Microsoft

4pm, Thompson 391


Abstract: This talk traces the development of a data science project at Microsoft over the course of 2.5 years, from initial problem statements to a fully productized system, from idea formulation to patent filings. We’ll touch on technical challenges large and small, competing business goals, and difficult tradeoffs. Most importantly, we’ll discuss how people from different teams, with different jobs and different backgrounds, contributed to this project, and how this diversity of ideas was necessary for its success.

11/5, 2018

Applications of topology for information fusion

Emilie Purvine, Ph.D.

Data Scientist and Mathematician

Pacific Northwest National Laboratory

4pm, Thompson 391


Abstract: In the era of "big data" we are often overloaded with information from a variety of sources. Information fusion is important when different data sources provide information about the same phenomena. For example, news articles and social media feeds may both be providing information about current events. In order to discover a consistent world view, or a set of competing world views, we must understand how to aggregate, or "fuse", information from these different sources. In practice much of information fusion is done on an ad hoc basis, when given two or more specific data sources to fuse. For example, fusing two video feeds which have overlapping fields of view may involve coordinate transforms; merging GPS data with textual data may involve natural language processing to find locations in the text data and then projecting both sources onto a map visualization. But how does one do this in general? It turns out that the mathematics of sheaf theory, a domain within algebraic topology, provides a canonical and provably necessary language and methodology for general information fusion. In this talk I will motivate the introduction of sheaf theory through the lens of information fusion examples.
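As a toy illustration of the consistency question that sheaf theory formalizes (all source names and values below are hypothetical, not taken from the talk), two data sources can only be "glued" into one world view if they agree on the variables they share:

```python
# Toy sketch of the consistency check sheaf theory makes rigorous:
# each "source" reports values for some variables, and two sources are
# consistent exactly when they agree on every variable they share.

def conflicts(sections):
    """Return (source, source, variable) triples where overlaps disagree."""
    found = set()
    names = list(sections)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            for var in sections[a].keys() & sections[b].keys():
                if sections[a][var] != sections[b][var]:
                    found.add((a, b, var))
    return found

# Hypothetical fused report: a news feed, a GPS trace, and a social feed.
sections = {
    "news":   {"event": "parade", "city": "Tacoma"},
    "gps":    {"city": "Tacoma", "lat": 47.25},
    "social": {"event": "parade", "city": "Seattle"},
}
print(conflicts(sections))
```

Here the social feed disagrees with both other sources about the city, so no single consistent world view exists; a sheaf-theoretic treatment quantifies and localizes exactly this kind of obstruction.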


This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. Approved for Public Release, Distribution Unlimited.

11/26, 2018

Opportunities in Scientific Computing: Outsourcing the Computation So We Can Do the Thinking...

Jake Price

Assistant Professor of Mathematics

University of Puget Sound

4pm, Thompson 391


Abstract: In the age of NumPy, WolframAlpha, and R (not to mention calculators), manual computation has gone the way of the dinosaurs. My research is in the field of scientific computing. That is, I study how we can use computers to help us solve scientific problems that would be exceptionally tedious or impossible to solve by hand. In this talk, I’ll share a few open projects in scientific computing that I am working on. First: How can we simulate precisely how a protein folds from one conformation to another? We’ll make use of probability theory to reframe this question in a more general way, and use concepts from statistics and parallel computing to think about computationally efficient ways to answer this. Second: How can we simulate systems that are way too big to even fit on our computer? This will draw on concepts from partial differential equations, Fourier analysis, and more!
If you are interested in any of these topics, I would love to present them to you and potentially talk more!
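As a minimal illustration of the "reframe with probability" idea (a generic toy example, not the speaker's protein-folding methods), Monte Carlo sampling lets the computer estimate a quantity we would not want to compute by hand:

```python
# Estimate the integral of exp(-x^2) on [0, 1] by random sampling:
# average the function at uniform random points and scale by the
# interval length. Accuracy improves as the sample count grows.
import math
import random

def mc_integrate(f, a, b, n, seed=0):
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

estimate = mc_integrate(lambda x: math.exp(-x * x), 0.0, 1.0, 100_000)
print(estimate)  # close to the true value, approximately 0.7468
```

The same averaging strategy parallelizes trivially, since each sample is independent, which is one reason probabilistic reformulations pair well with parallel computing.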

12/3, 2018

Deep Neural Networks, and Their Application to Scalograms of Laboratory Mouse Vocalizations

Adam Smith

Assistant Professor of Computer Science

University of Puget Sound

4pm, Thompson 391


Abstract: This will be a talk adapted from the presentation I made at the 2017 IEEE International Conference on Bioinformatics and Biomedicine. I will start out with an introduction to deep neural networks: what they are and how they were adapted from the previous “shallow” neural networks. In particular, a convolutional neural network (“CNN”) behaves much like the mammalian visual cortex, and has had great success in recent years in tasks with a visual or spatial component (such as in Google’s AlphaGo project). I will then show how we adapted a CNN to the task of identifying mouse vocalizations. This is a laborious task that traditionally takes hundreds of person-hours and yields inconsistent results. I will show our success using CNNs, and detail possible next steps.
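To give a flavor of the core operation (this is an illustrative sketch only, not the speaker's network), a convolutional layer slides a small weight grid over its input, whether a photograph or a scalogram of a vocalization:

```python
# A single convolutional filter is a small weight grid slid across the
# input; each output entry is the weighted sum of the patch under it.
# Minimal pure-Python version (stride 1, no padding); real CNNs stack
# many such filters with nonlinearities and learn the weights by training.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A vertical-edge detector applied to a toy 4x4 patch: the response is
# large exactly where the values jump from low to high.
patch  = [[0, 0, 9, 9],
          [0, 0, 9, 9],
          [0, 0, 9, 9],
          [0, 0, 9, 9]]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(patch, kernel))  # [[0, 18, 0], [0, 18, 0], [0, 18, 0]]
```

Because the same filter is applied everywhere, the layer detects a pattern regardless of where it occurs, which is what makes CNNs a natural fit for localizing vocalizations in a time-frequency image.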

2/19, 2019

A Higher Multivariate Chain Rule

Aidan Schumann

Department of Mathematics and Computer Science

University of Puget Sound

4pm, Thompson 395


Abstract: How do you take the repeated derivative of composed functions? For single-variable functions, the first few derivatives are readily computable by hand, but each consecutive derivative requires additional uses of the chain and product rules and quickly becomes overwhelming. The situation is even worse for the multivariate case. There has to be a better way than computing higher derivatives by hand. The solution for single-variable functions is to define a class of polynomials—Bell polynomials—which do all of the algebra and combinatorics for you, and use them to define the so-called Faà di Bruno formula for taking repeated derivatives of composed, single-variable functions. However, there has been, to date, no generalization of Bell polynomials for use in the multivariate Faà di Bruno formula. In this talk, we introduce generalizations of Bell polynomials and the Faà di Bruno formula, and apply them to finding repeated derivatives of multivariate functions.
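For reference, the standard single-variable Faà di Bruno formula in Bell-polynomial form, which the talk generalizes, can be written as:

```latex
\frac{d^n}{dx^n} f(g(x))
  = \sum_{k=1}^{n} f^{(k)}(g(x))\,
    B_{n,k}\!\left(g'(x),\, g''(x),\, \dots,\, g^{(n-k+1)}(x)\right)
```

Here $B_{n,k}$ denotes the partial Bell polynomials, which package the combinatorics of distributing $n$ derivatives across the inner function's factors.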