Ellie Pavlick

Affiliation confirmed via AI analysis of OpenAlex, ORCID, and web sources.

Faculty researcher, Brown University



Biography and Research Information

Overview (AI-generated summary)

Dr. Ellie Pavlick's research spans natural language processing, multimodal machine learning, and text readability, addressing challenges such as domain adaptation and few-shot learning. Her contributions extend to practical applications, including recent work on automated, data-driven presentation of cancer treatment options for patients. She also co-authored the paper introducing BLOOM, a 176B-parameter open-access multilingual language model. Other publications investigate how well prompt-based models understand the meaning of their prompts, symbols and grounding in large language models, and whether language models can encode perceptual structure without grounding. Pavlick is a faculty member at Brown University. Her research focuses on topic modeling and advancing the capabilities of language models.

Metrics

  • h-index: 35
  • Publications: 175
  • Citations: 7,699

Selected Publications

  • Parallel trade-offs in human cognition and neural networks: The dynamic interplay between in-context and in-weight learning (2025) DOI
  • Does Training on Synthetic Data Make Models Less Robust? (2025) DOI
  • The dynamic interplay between in-context and in-weight learning in humans and neural networks. (2025) DOI
  • How Can Deep Neural Networks Inform Theory in Psychological Science? (2024) DOI
  • Characterizing Mechanisms for Factual Recall in Language Models (2023) DOI
  • Are Language Models Worse than Humans at Following Prompts? It’s Complicated (2023) DOI
  • Analyzing Modular Approaches for Visual Question Decomposition (2023) DOI
  • How Can Deep Neural Networks Inform Theory in Psychological Science? (2023) DOI
  • Unit Testing for Concepts in Neural Networks (2022) DOI
  • Do Prompt-Based Models Really Understand the Meaning of Their Prompts? (2022) DOI
  • Was it “stated” or was it “claimed”?: How linguistic bias affects generative language models (2021) DOI
  • Does Vision-and-Language Pretraining Improve Lexical Grounding? (2021) DOI
  • Frequency Effects on Syntactic Rule Learning in Transformers (2021) DOI
  • Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color (2021) DOI
  • Spatial Language Understanding for Object Search in Partially Observed City-scale Environments (2021) DOI
