Tatsunori Hashimoto

Researcher, Microsoft Semantic Machines

thashim [AT] stanford.edu

Bio

From fall 2019 to 2020, I will be at Microsoft Semantic Machines as a researcher.

In fall 2020, I will join the computer science department at Stanford as an assistant professor.

Machine learning systems do well on their training domain, but often fail in dramatic and unexpected ways in the wild. I view these problems as coverage issues: systems can appear to do well even if they fail on rare examples (in prediction) or plagiarize from the training set (in generation). My work seeks to develop evaluations, representations, and training procedures that guarantee uniform, rather than just average-case, performance of machine learning models.
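As an illustrative sketch (not from this page; the groups, losses, and sizes below are hypothetical), the gap between average-case and uniform performance can be seen by comparing a population-weighted average loss with the worst loss over subpopulations:

```python
# Hypothetical example: a model evaluated over three subpopulations.
# Average loss hides a failure on a rare group; the worst-group loss exposes it.

def average_loss(group_losses, group_sizes):
    """Population-weighted average loss across groups."""
    total = sum(group_sizes)
    return sum(l * n for l, n in zip(group_losses, group_sizes)) / total

def worst_group_loss(group_losses):
    """Worst-case (uniform) performance: the maximum per-group loss."""
    return max(group_losses)

losses = [0.05, 0.06, 0.90]  # per-group losses; the last group fails badly
sizes = [900, 95, 5]         # the failing group is rare

print(average_loss(losses, sizes))  # 0.0552 -- looks fine on average
print(worst_group_loss(losses))     # 0.9    -- uniform performance is poor
```

Distributionally robust training procedures target the second quantity (or a bound on it) rather than the first.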

Previously, I was a post-doc at Stanford working with John C. Duchi and Percy Liang on tradeoffs between the average and worst-case performance of machine learning models. Before my post-doc, I was a graduate student at MIT co-advised by Tommi Jaakkola and David Gifford, and an undergraduate student at Harvard in statistics and math advised by Edoardo Airoldi.

Publications

Most recent publications on Google Scholar.

Distributionally Robust Losses Against Mixture Covariate Shifts PDF

John C Duchi, Tatsunori B Hashimoto, Hongseok Namkoong

Preprint

Unifying Human and Statistical Evaluation for Natural Language Generation PDF

Tatsunori B Hashimoto*, Hugh Zhang*, Percy Liang

Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019)

Generating Sentences by Editing Prototypes PDF

Kelvin Guu*, Tatsunori B Hashimoto*, Yonatan Oren, Percy Liang

Transactions of the Association for Computational Linguistics (TACL, presented at ACL 2018)

Fairness Without Demographics in Repeated Loss Minimization PDF

Tatsunori B Hashimoto, Megha Srivastava, Hongseok Namkoong, Percy Liang

Proceedings of the 35th International Conference on Machine Learning (ICML 2018, Best paper runner up)

Word embeddings as metric recovery in semantic spaces PDF

Tatsunori B Hashimoto, David Alvarez-Melis, Tommi S Jaakkola

Transactions of the Association for Computational Linguistics 4 (TACL, presented at ACL 2016)

Metric recovery from directed unweighted graphs PDF

Tatsunori B Hashimoto, Yi Sun, Tommi Jaakkola

Artificial Intelligence and Statistics (AISTATS 2015), (best poster at NeurIPS 2014 workshop on networks)

Discovery of directional and nondirectional pioneer transcription factors by modeling DNase profile magnitude and shape PDF

Richard I Sherwood*, Tatsunori B Hashimoto*, Charles W O'Donnell*, Sophia Lewis, Amira A Barkal, John Peter Van Hoff, Vivek Karun, Tommi Jaakkola, David K Gifford

Nature Biotechnology (2014)

Talks and slides

NeuralGen Workshop (NAACL 2019): Defining and Evaluating Diversity in Generation

Projects

Distributionally Robust Models
Training methods to make models perform uniformly well over a population.
Diverse Natural Language Generation
Methods for quantifying and improving the diversity of generation systems.
Representations from Random Walks
Understanding learned representations (such as word embeddings) through random walks.

Resume

Acknowledgement

This website uses a design and template by Martin Saveski.