Cooper Raterink

AI Safety Research + Engineering

Cohere AI

I research large language model safety at Cohere, a Toronto-based startup building a developer-focused NLP platform. (We’re hiring! Shoot me an email at cooper at cohere dot ai and say hi 🙂.) I recently graduated from Stanford University’s MSc program in computer science, where I studied human-centered artificial intelligence and analyzed misinformation with the Stanford Internet Observatory as part of the Election Integrity Partnership.

My research and writing are generally at the intersection of technology and society. Current interests include AI safety, fairness, and ethics; the effects of AI on the online information ecosystem and influence operations; contrastive learning for natural language understanding; and safety and performance evaluation of large language generation models.

Outside of work and research, you can usually find me near the water 🌊🌊🌊

Education
  • MSc in Artificial Intelligence, 2020

    Stanford University

  • BSc in Electrical and Computer Engineering, 2018

    University of Texas at Austin

Publications

Technology

[Oct 2021] Safety Harness: I led the design and implementation of Cohere’s (work-in-progress) language model safety evaluation system, and wrote the linked documentation to share insights with our developers.

[Sep 2021] Mitigating harm in language models with conditional-likelihood filtration: This paper describes how the safety team at Cohere filters nuanced harmful text from huge training corpora using large pre-trained language models. Project led by Helen Ngo.
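
For a flavor of how this works, here is a minimal Python sketch based on my reading of the approach: score each training document by how likely a harmful “trigger phrase” becomes when a language model is conditioned on that document, and drop documents above a threshold. The model, trigger phrase, threshold, and helper names below are illustrative assumptions, not the paper’s actual configuration.

    # A minimal, illustrative sketch of conditional-likelihood filtration.
    # The model, trigger phrase, and threshold are placeholder assumptions,
    # not the configuration used in the paper.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    TRIGGER = "I hate you because"  # hypothetical harmful trigger phrase

    def trigger_log_likelihood(document: str, trigger: str = TRIGGER) -> float:
        """Mean per-token log-likelihood of the trigger, conditioned on the document."""
        doc_ids = tokenizer(document, truncation=True, max_length=900).input_ids
        trig_ids = tokenizer(" " + trigger).input_ids
        input_ids = torch.tensor([doc_ids + trig_ids])
        with torch.no_grad():
            logits = model(input_ids).logits
        # logits[t] predicts token t+1, so keep only positions predicting trigger tokens.
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
        targets = input_ids[0, 1:]
        per_token = log_probs[torch.arange(len(targets)), targets]
        return per_token[len(doc_ids) - 1:].mean().item()

    def keep_document(document: str, threshold: float = -4.0) -> bool:
        # Drop documents under which the harmful trigger becomes too likely.
        return trigger_log_likelihood(document) < threshold

In practice this scoring would run at scale over the full training corpus, with the production language model and a curated set of trigger phrases rather than a single hard-coded string.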

[Dec 2020] Short-Term Solar Irradiance Forecasting Using Calibrated Probabilistic Models: I was fortunate to participate in the AI for Climate Change research bootcamp at Stanford and contribute to the early stages of the linked research. Project led by Eric Zelikman.

Society

[July 2021] Assessing the safety risks of software written by artificial intelligence: A piece investigating the risk landscape around the application of large language models to code generation. I focus on OpenAI’s Codex model as a case study, as it was recently deployed to thousands of developers through GitHub Copilot. Published in Tech Policy Press and edited by Justin Hendrix.

[May 2021] Assessing the risks of language model “deepfakes” to democracy: A piece containing my thoughts on why we didn’t see substantial use of text deepfakes during the 2020 US election and how risks may evolve alongside innovations in language modeling tech. Also published in Tech Policy Press.

[April 2021] Cohere Responsible Use Documentation: I was a major contributor to Cohere’s responsible use documentation, a series of webpages that outline usage guidelines, communicate the limitations and biases of our language models, host the Cohere model cards and data statements, and provide other essential information to Cohere Platform developers.

[Oct 2020] Then and Now: How Online Conversations About Voter Fraud Have Changed Since 2016: A brief investigation into how narratives about voter fraud differed between the 2016 and 2020 US presidential elections. This was the second blog post I contributed to while an analyst with the Election Integrity Partnership; I was fortunate to collaborate with Renee DiResta, Ben Nimmo, and other researchers from the partnership on this particular post.

[Oct 2020] Seeking To Help and Doing Harm: The Case of Well-Intentioned Misinformation: A study about the prevalence of misinformation stemming from well-intentioned actors during the 2020 US presidential election. This was the first blog post I contributed to while an analyst with the Election Integrity Partnership; I was fortunate to collaborate with Emerson T. Brooking and other researchers from the partnership on this particular post.

[July 2020] Updating the Human Priors: Takeaways from teaching CS82SI Workshops for Wellbeing in Tech at Stanford during Spring 2020. In collaboration with the fantastic Nik Marda and Sonia Garcia.

Creative

[Jan 2021] Reanimating the Digital Zombie: A fun collaboration with writer and friend Jake Zawlacki. This piece stemmed from a late-night conversation about zombie theory, the phrase “digital zombies”, and our frustration with attention-grabbing media. Huge shoutouts to our editor Irene Han, to Jason Zhao for helping to found the unbelievably cool Rewired magazine at Stanford, and to my friend Rayan Krishnan for designing the beautiful and engaging website.

Teaching

While at Stanford, I TA’d CS109 and CS107, and lectured CS109 during the summer quarter of 2020. With funding from Well-being at Stanford, I designed and taught the Stanford course CS82SI: Workshops for Wellbeing in Tech in Spring 2020 together with my amazing co-instructors Sonia Garcia and Nik Marda. During the summers of 2019 and 2021, I taught Python and JavaScript to brilliant high schoolers in Palestine as part of the Code.X Code For Palestine program.

I love to teach and am always interested in hearing about opportunities! 🤓