Daniel Yang

Let's make AI for the people.

    Welcome to my personal page! I am a Computer Science Ph.D. student at the University of Southern California. I am a member of the Signal Analysis and Interpretation Laboratory, advised by Professor Shrikanth Narayanan. My work is supported by the NSF Graduate Research Fellowship.

    I am currently interested in multimodal transformers, vision-language modeling, and multimodal media understanding. I believe there is a lot to improve in how deep models process data from different modalities, and ultimately, I wish to distill these ideas into inductive biases for AI models.

    Please feel free to reach out to me via any of my socials above if you are looking to collaborate!

    news

    May 13, 2024 Started a new position as an AIML research intern with Apple’s Siri and Information Intelligence team.
    Jun 2, 2023 “Context Unlocks Emotions: Text-based Emotion Classification Dataset Auditing with Large Language Models” accepted to ACII 2023.
    Aug 22, 2022 Began my Ph.D. program at the University of Southern California.
    Apr 3, 2022 Awarded the National Science Foundation Graduate Research Fellowship in artificial intelligence.

    selected publications

    1. ACII
      Context Unlocks Emotions: Text-based Emotion Classification Dataset Auditing with Large Language Models
      Daniel Yang, Aditya Kommineni, Mohammad Alshehri, and 4 more authors
      In Affective Computing and Intelligent Interaction (ACII), 2023
    2. TASLP
      A Study of Parallelizable Alternatives to Dynamic Time Warping for Aligning Long Sequences
      Daniel Yang, Kevin Ji, and TJ Tsai
      IEEE Transactions on Audio, Speech, and Language Processing (TASLP), 2022
    3. ISMIR
      Composer Classification with Cross-Modal Transfer Learning and Musically Informed Augmentation
      Daniel Yang and TJ Tsai
      In Proceedings of the International Society for Music Information Retrieval (ISMIR), 2021 (candidate for best student paper)