第三届世界音乐人工智能大会 The Third Summit on Music Intelligence (SOMI 2026)

2026-04-25
#音乐新闻(Music News)
第三届世界音乐人工智能大会(The Third Summit on Music Intelligence)将于2026年4月25日至26日在北京中央音乐学院举办。

The Third Summit on Music Intelligence (SOMI 2026) will be held on April 25–26, 2026 at the Central Conservatory of Music in Beijing.

第三届世界音乐人工智能大会(The Third Summit on Music Intelligence)将于2026年4月25日至26日在北京中央音乐学院举办。大会将汇聚全球音乐人工智能领域的学术领军人物及音乐大模型领域的代表性企业,凝聚智慧、拓展视野,共同探索未来音乐的发展图景。会议将聚焦前沿技术进展与产业热点,搭建高水平交流平台,推动音乐人工智能在“产、学、研、用”各环节的深度融合,服务北京发展、助力国家战略,与世界携手一同开创音乐与智能融合的未来。

The Third Summit on Music Intelligence (SOMI 2026) will be held on April 25–26, 2026 at the Central Conservatory of Music (CCOM) in Beijing. The summit will bring together leading scholars in music AI from around the world and representative companies in the field of music foundation models to pool insights, broaden horizons, and jointly explore the future of music. Focusing on cutting-edge technical advances and industry trends, the conference aims to build a high-level exchange platform, deepen the integration of industry, academia, research, and application in music AI, serve Beijing's development, support national strategy, and join hands with the world in creating a future where music and intelligence converge.


01

会议基本信息 Conference Information

会议时间:2026年4月25日—26日
Dates: April 25–26, 2026

会议地点:中央音乐学院(演奏厅)
Venue: Central Conservatory of Music (Recital Hall)

主办单位:中国人工智能学会、中央音乐学院
Hosts: Chinese Association for Artificial Intelligence (CAAI); Central Conservatory of Music (CCOM)

承办单位:中国人工智能学会艺术与人工智能专委会、中央音乐学院音乐人工智能与音乐信息科技系
Organizers: CAAI Art and Artificial Intelligence Technical Committee; Department of Music AI and Music Information Technology, CCOM


02

开幕式 Opening Ceremony

2026年4月26日(Sun.)

09:00—11:35  中央音乐学院琴房楼演奏厅

Recital Hall of the Practice Building, CCOM

第三届世界音乐人工智能大会开幕式

SOMI2026 Opening Ceremony


主持人: 马华东

中国人工智能学会副理事长、北京邮电大学学术委员会副主任/讲席教授、国家级高层次人才

Ma Huadong

Vice President of the Chinese Association for Artificial Intelligence, Vice Chair of the Academic Committee and Chair Professor at Beijing University of Posts and Telecommunications, National High-Level Talent Awardee


大会主旨报告 Keynote Speeches

· 管晓宏 Guan Xiaohong

音乐智能量化与认知的研究进展 Progress on Computational Intelligence and Quantitative Cognition of Music

· 克里斯·查菲 Chris Chafe

聆听数据:以数据声化为音乐与科学打造定制化计算机音乐网络应用 Listening to Data: Creating Custom Computer Music Webapps for Music and Science through Data Sonification

· 乔治·海杜 Georg Hajdu

从音乐厅到社会空间:借助技术重新语境化当代音乐 From Concert Hall to Social Space: Recontextualizing Contemporary Music through Technology

· 李小兵 Li Xiaobing

机文主义:音乐学院的未来在哪里? Machinism: Where Is the Future of Music Conservatories?


03

会议日程安排 Conference Schedule

2026年4月25日(Sat.)

10:00—18:00 中央音乐学院西门

West Gate, CCOM

报到注册 Registration


14:00—16:00 中央音乐学院教学楼701

701, Academic Building, CCOM

第三届世界音乐人工智能大会青年论坛

SOMI2026 Youth Forum

· 戴琮人 Dai Congren 音乐全谱理解基准:大模型对完整乐谱理解能力的评测与分析 Musical Score Understanding Benchmark: Evaluating Large Language Models' Comprehension of Complete Musical Scores

· 丘治平 Qiu Zhiping 音频驱动的弦乐演奏动作生成 ELGAR: Expressive Cello Performance Motion Generation for Audio Rendition

· 童心怡 Tong Xinyi 音画共鸣:视频配乐生成的视觉画面、时间节奏与音乐表达对齐 Video Echoed in Music: Semantic, Temporal, and Rhythmic Alignment for Video-to-Music Generation

· 吴尚达 Wu Shangda CLaMP 3:跨未对齐模态与未见语言的通用音乐信息检索 CLaMP 3: Universal Music Information Retrieval Across Unaligned Modalities and Unseen Languages


16:00—17:00 中央音乐学院教学楼717

717, Academic Building, CCOM

中国人工智能学会艺术与人工智能专委会会议(闭门会议)

Meeting of the Art and Artificial Intelligence Technical Committee, Chinese Association for Artificial Intelligence (Closed-door Meeting)


2026年4月26日(Sun.)

09:00—11:35  中央音乐学院琴房楼演奏厅 Recital Hall of the Practice Building, CCOM

第三届世界音乐人工智能大会开幕式,领导致辞,主旨报告

SOMI2026 Opening Ceremony, Opening Remarks, and Keynote Speeches


14:00—15:30 中央音乐学院琴房楼演奏厅

Recital Hall of the Practice Building, CCOM

第三届世界音乐人工智能大会学术论坛

SOMI2026 Academic Forum

· 刘家丰 Liu Jiafeng 统一声学 Token 空间的音乐生成大模型:从深层表征到高质量生成 Large-Scale Music Generation Model with a Unified Acoustic Token Space: From Deep Representations to High-Quality Generation

· 马军 Ma Jun 音乐脑机接口:概念、研究进展与应用前景 Music Brain–Computer Interfaces: Concepts, Research Progress, and Application Prospects

· 卢迪 Lu Di 面向 AI 生成音乐工作流的 Web DAW A Web-based DAW for AI-generated Music Workflow

· 肯尼斯·菲尔兹 Kenneth Fields

面向网络化电子音乐合奏的 AI 编程助手的实践应用

Practical Applications of AI Coding Assistants for Networked Electronic Music Ensembles

· 亚伦·威廉姆森 Aaron Williamon 表演科学的未来 The Future of Performance Science

· 凯特·霍普 Cat Hope 确保音乐领域人工智能政策的包容性与可持续性:一个治理议题 Ensuring Inclusive and Sustainable AI Policy for the Music Sector: A Governance Issue


15:40—17:00 中央音乐学院琴房楼演奏厅

Recital Hall of the Practice Building, CCOM

第三届世界音乐人工智能大会产业论坛

SOMI2026 Industry Forum

· 徐帆 Xu Fan 写歌,正在从创作变成选择吗? Is Songwriting Becoming Selection Rather Than Creation?

· 龚俊民 Gong Junmin 推动开源音乐生成的边界 Pushing the Boundaries of Open-Source Music Generation

· 姜涛 Jiang Tao 从一杯奶茶到音乐创作和消费的 Agent From a Cup of Milk Tea to Agents for Music Creation and Consumption

· 刘晓光 Liu Xiaoguang AI 赋能音乐教育 AI-Empowered Music Education


17:00—18:00 中央音乐学院琴房楼演奏厅

Recital Hall of the Practice Building, CCOM

第三届世界音乐人工智能大会主题交流、闭幕式

SOMI2026 Panel Discussion & Closing Ceremony


09:00—19:00 电子音乐马拉松(线上)

SOMI2026 Electronic Music Marathon (Online)


04

嘉宾介绍 Speaker Profiles

主旨报告嘉宾

Keynote Speakers


管晓宏

Guan Xiaohong

管晓宏,中国科学院院士,IEEE Fellow,分别于1982、1985年获清华大学学士与硕士学位,1993年获美国康涅狄格大学博士学位;1993-1995年任美国PG&E公司高级顾问工程师,1999-2000年任哈佛大学访问科学家,1995年起任西安交通大学教授,2008-2025任电子与信息工程学院院长、电子与信息学部主任;自2001年任清华大学讲席教授组成员,2003-2008年任清华大学自动化系主任;中央音乐学院音乐人工智能与信息科学团队成员。管晓宏院士主要从事复杂网络化系统的经济性与安全性,电力、能源、制造系统优化,信息物理融合系统,网络空间信息安全等领域的研究,同时开展音乐智能量化和信息处理的研究,曾获2005年、2018年国家自然科学二等奖,2019年何梁何利科技进步奖及多项国际学术奖励。近年来,管晓宏院士与中央音乐学院、西安音乐学院合作,担任中央音乐学院博士生导师,探讨艺术与科学的关系和相互影响,在音乐智能量化领域取得重要研究成果,并创办了“艺术与科学的交汇”系列音乐会。

Guan Xiaohong, a Member of the Chinese Academy of Sciences and an IEEE Fellow, received his B.S. and M.S. degrees from Tsinghua University, Beijing, China, in 1982 and 1985, respectively, and his Ph.D. degree from the University of Connecticut in 1993. He was a senior consulting engineer with Pacific Gas and Electric from 1993 to 1995 and a visiting scientist at the Division of Engineering and Applied Science, Harvard University, from 1999 to 2000. From 1985 to 1988 and since 1995, he has been with Xi'an Jiaotong University, Xi'an, China, where he has held the Cheung Kong Professorship of Systems Engineering since 1999 and served as Dean of the Faculty of Electronic and Information Engineering from 2008 to 2025. Since 2001 he has also been with the Center for Intelligent and Networked Systems, Tsinghua University, Beijing, China, and he served as Head of the Department of Automation at Tsinghua University from 2003 to 2008. Professor Guan is also a member of the Department of Music AI and Information Science, Central Conservatory of Music. His research covers the economics and security of complex networked systems; the optimization of power, energy, and manufacturing systems; cyber-physical systems; and cyberspace security, alongside work on the quantification of music intelligence and music information processing. He received the State Natural Science Award (Second Class) in 2005 and 2018, the Ho Leung Ho Lee Prize for Scientific and Technological Progress in 2019, and several international academic awards. In recent years, he has collaborated with the Central Conservatory of Music and the Xi'an Conservatory of Music, serving as a doctoral supervisor at CCOM, exploring the relationship and mutual influence between art and science, producing important results in the quantification of music intelligence, and founding the concert series "The Convergence of Art and Science."


克里斯·查菲

Chris Chafe

现任斯坦福大学音乐系 Duca Family Professor of Music(杜卡家族音乐教授),并担任斯坦福大学计算机音乐与声学研究中心(CCRMA)主任。他是一位作曲家、即兴演奏家和大提琴演奏家,长期致力于音乐、计算技术、现场表演与声学研究交叉领域的探索,在国际计算机音乐领域具有重要影响力。Chafe 教授的研究涵盖计算机音乐、数字声音合成、实时演奏系统以及网络化音乐表演等方向,尤其在低延迟网络协作演奏与远程音乐表演方面具有开创性贡献。他的工作不断拓展技术条件下音乐创作、演奏协作与听觉感知的边界,同时也涉及声音在科学与医学场景中的应用。作为作曲家与演奏者,Chris Chafe 创作了大量融合器乐实践与技术实验的作品。其大提琴演奏与即兴背景深刻影响了他的艺术语言,使其作品始终保持鲜明的现场性、互动性与探索性。除斯坦福大学外,他还曾在英属哥伦比亚大学、都灵理工大学和柏林工业大学等国际机构担任访问学者或客座教授。凭借多年来在创作、科研、教学与学术平台建设方面的持续投入,Chris Chafe 已成为全球音乐科技与跨学科声音研究领域的重要代表人物之一。

Chris Chafe is the Duca Family Professor of Music at Stanford University and Director of the Center for Computer Research in Music and Acoustics (CCRMA). A composer, improviser, and cellist, he is internationally recognized for his pioneering work at the intersection of music, computation, performance, and acoustic research. At Stanford, his creative and scholarly activities have long explored how computer-based technologies can expand musical expression, collaboration, and listening. His research spans computer music, digital sound synthesis, real-time performance systems, and networked music performance. He is especially known for developing methods for ultra-low-latency musical interaction over networks, helping to shape new possibilities for distributed ensemble performance and telematic music-making; CCRMA's JackTrip project involves live concertizing with musicians the world over. His work has also engaged questions of auditory perception, human-computer interaction, and sound in scientific and medical contexts. As a composer and performer, Chafe has created a wide range of works that combine instrumental practice with technological experimentation. His background as a cellist and improviser remains central to his artistic identity, informing a body of work in which live musicianship and computational processes interact closely. At IRCAM (Paris) and The Banff Centre (Alberta), he pursued methods for digital synthesis and network music performance, and he has held major international visiting appointments: in 2019 he was International Visiting Research Scholar at the Peter Wall Institute for Advanced Studies, University of British Columbia, Visiting Professor at the Politecnico di Torino, and Edgard-Varèse Guest Professor at the Technical University of Berlin. Through decades of creative research, teaching, and institution-building, he has made lasting contributions to the global development of computer music and interdisciplinary sound studies.


乔治·海杜

Georg Hajdu

Georg Hajdu,德国作曲家、学者,现任德国汉堡音乐与戏剧学院(HfMT)教授,利盖蒂中心主任。他长期致力于音乐、科技与跨学科艺术实践的融合研究,关注多媒体作曲、网络化音乐、生成式音乐系统与数字乐谱等方向。Hajdu早年在科隆学习分子生物学与作曲,后于美国加州大学伯克利分校完成博士学位,并与CNMAT保持紧密关联,是当代音乐科技领域具有代表性的学者与作曲家之一。

Georg Hajdu is a German composer and scholar, currently serving as Professor of Multimedia Composition at the Hochschule für Musik und Theater Hamburg (HfMT). He also leads the Ligeti Center and serves as the university’s Commissioner for Research and Transfer. His work focuses on the intersection of music, technology, and interdisciplinary artistic practice, with particular interests in multimedia composition, networked music, generative systems, and digital score environments. Hajdu studied molecular biology and composition in Cologne and later earned his PhD at the University of California, Berkeley, in close connection with CNMAT.


李小兵

Li Xiaobing

李小兵,中央音乐学院教授、博士生导师、音乐人工智能系主任,国家哲学社会科学领军人才、中宣部“四个一批”人才、享受政府特殊津贴专家、国家社科重大项目首席专家、中国人工智能学会会士、艺术与人工智能专委会主任,中国计算机学会理事、计算艺术分会主任,“全国高校黄大年式教师团队”负责人。作曲博士,毕业于中央音乐学院作曲系,师从著名作曲家、中国音乐家协会名誉主席、中央音乐学院名誉院长吴祖强教授。音乐创作涵盖几乎所有音乐类型,部分作品受到群众喜爱,具有广泛影响力,曾荣获金钟奖、文华大奖、文华作曲奖、全国歌剧舞剧一等奖、中宣部“五个一工程”奖等国内外奖项。

Li Xiaobing is Professor and Doctoral Supervisor at the Central Conservatory of Music and Director of the Department of Music Artificial Intelligence. He is a National Leading Talent in Philosophy and Social Sciences, a member of the "Four Batches" national talent program of the Publicity Department of the CPC Central Committee, an expert entitled to special government allowances, Principal Investigator of major national social science projects, Fellow of the Chinese Association for Artificial Intelligence (CAAI) and Chair of its Art and Artificial Intelligence Technical Committee, and Council Member of the China Computer Federation (CCF) and Chair of its Computational Art Branch. He also leads a "National Huang Danian-style Faculty Team" in higher education. A Doctor of Composition, Li Xiaobing graduated from the Composition Department of the Central Conservatory of Music, where he studied under the renowned composer Professor Wu Zuqiang, Honorary President of the Chinese Musicians Association and of the Central Conservatory of Music. His musical output spans almost all genres, with works enjoying wide popularity and broad influence. His honors include the Golden Bell Award, the Wenhua Grand Prize, the Wenhua Composition Award, first prizes in national opera and dance drama competitions, and the "Five-One Project" Award of the Publicity Department of the CPC Central Committee.


学术论坛嘉宾

Academic Forum Speakers


刘家丰

Liu Jiafeng

刘家丰,中央音乐学院音乐人工智能与音乐信息科技系副教授。博士毕业于中央音乐学院,中国首个音乐人工智能博士,师从俞峰教授、孙茂松教授。自幼跟随四川音乐学院钢琴系教授学习,本硕期间曾任校交响乐团首席钢琴。致力于研究多轨道音乐生成、音乐音频信号处理,多模态音乐大模型等前沿方向。提出了世界首个端到端交响乐生成模型,CCOM声源分离训练与推理框架的研发人,Sound Demixing Challenge 2023 国际音乐声源分离大赛冠军。

Jiafeng Liu is an Associate Professor in the Department of Music AI and Music Information Technology at the Central Conservatory of Music, as well as an AI researcher and pianist. He focuses on multi-track music generation, having proposed the world’s first end-to-end symphony generation model. He also conducts in-depth research in music source separation and won first place in the Sound Demixing Challenge 2023. Currently, he devotes his research efforts to large-scale multimodal music generation models.


马军

Ma Jun

马军,中央音乐学院音乐人工智能与音乐信息科技系讲师。博士毕业于北京大学神经科学研究所,并于圣路易斯华盛顿大学麻醉系完成博士后训练。拥有超过11年的侵入式与非侵入式脑机接口研发经验。现主要研究方向为音乐脑机接口、基于脑科学的个性化音乐治疗、音乐处理的神经机制。

Ma Jun is a Lecturer in the Department of Music Artificial Intelligence and Music Information Technology at the Central Conservatory of Music. He received his Ph.D. from the Institute of Neuroscience at Peking University and completed postdoctoral training in the Department of Anesthesiology at Washington University in St. Louis. He has more than 11 years of experience in the research and development of both invasive and non-invasive brain-computer interfaces. His current research focuses on music brain-computer interfaces, personalized music therapy based on neuroscience, and the neural mechanisms underlying music processing.


卢迪

Lu Di

卢迪,中央音乐学院音乐人工智能与音乐信息科技系助理研究员,东京大学情报理工学系硕士。国内首个商业歌声合成软件及首个自动作曲软件的核心开发者,拥有15年音乐+计算机交叉领域从科研到商业落地的全流程经验,持有多项软件著作权与专利。

Assistant Researcher at the Department of Music AI and Music Information Technology, Central Conservatory of Music. He holds a Master's degree from the Department of Information Science and Technology, the University of Tokyo. He is the core developer of China's first commercial singing voice synthesis software and the first automatic composition software, with 15 years of full-cycle experience from research to commercial deployment in the interdisciplinary field of music and computer science. He holds multiple software copyrights and patents.


肯尼斯·菲尔兹

Kenneth Fields

Kenneth Fields(博士)现任中国科学院大学媒体艺术教授,在音乐与科技领域拥有丰富经验。2003年至2023年,他曾任中央音乐学院外籍教授。2008年至2013年,他担任加拿大卡尔加里大学“远程媒体艺术”加拿大研究讲席教授。Fields教授现为《Organised Sound》(剑桥大学出版社)编委,以及电子音乐研究亚洲网络(EMSAN)编委。同时,他还是2021—2026年国际欧洲研究委员会(ERC)项目“The Digital Score”的联合首席研究者,该项目聚焦于音乐乐谱的本质与技术研究。

Kenneth Fields (Ph.D.) is a Professor of Media Arts at the University of the Chinese Academy of Sciences, with rich experience in the field of music and technology. From 2003 to 2023, he was a Foreign Professor at the Central Conservatory of Music in Beijing, and from 2008 to 2013 he held the Canada Research Chair in Telemedia Arts at the University of Calgary. Prof. Fields serves on the editorial boards of Organised Sound (Cambridge University Press) and the Electronic Music Studies Asia Network (EMSAN). He is Co-PI of The Digital Score, an international European Research Council project (2021–2026) investigating the nature and technology of music scores.


亚伦·威廉姆森

Aaron Williamon

Aaron Williamon是Royal College of Music 表演科学教授,并担任表演科学中心(Centre for Performance Science, CPS)主任。该中心由皇家音乐学院与 Imperial College London 共同合作建立。他于2000年加入皇家音乐学院担任研究员,2004年晋升为高级研究员,并于2010年被任命为表演科学教授。他的研究主要关注高水平表演能力,以及将科学研究应用于音乐学习与教学的实践,同时也探讨音乐与艺术对社会的影响。Aaron 是国际表演科学研讨会(International Symposium on Performance Science)的创始人之一,同时也是学术期刊 Performance Science(隶属于 Frontiers )的创刊主编,并担任“健康音乐学院”(Healthy Conservatoires)国际网络的创始主席。该网络成立于2015年,旨在支持学生及专业表演艺术家的健康与福祉。他是 Royal Society of Arts 会士(FRSA)以及英国高等教育学会 AdvanceHE 会士(FHEA)。2008年,他被授予皇家音乐学院荣誉会员(HonRCM)。

Aaron Williamon is Professor of Performance Science at the Royal College of Music (RCM) where he directs the Centre for Performance Science (CPS), a partnership of the RCM and Imperial College London. Aaron joined the RCM as Research Fellow in 2000 and was appointed Senior Research Fellow in 2004 and Professor of Performance Science in 2010. His research focuses on skilled performance and applied scientific initiatives that inform music learning and teaching, as well as the impact of music and the arts on society. Aaron is the founder of the International Symposium on Performance Science, founding chief editor of Performance Science (a Frontiers journal), and the founding chair of Healthy Conservatoires, an international network constituted in 2015 to support health and wellbeing among student and professional performing artists. Aaron is a fellow of the Royal Society of Arts (FRSA) and the UK’s higher education academy, AdvanceHE (FHEA), and in 2008, he was elected an Honorary Member of the Royal College of Music (HonRCM).


凯特·霍普

Cat Hope

当代作曲家、电子音乐与数字乐谱研究者,长期从事实验音乐创作、非传统记谱与数字乐谱(Digital Score)研究。其学术与创作实践强调演奏者能动性、空间化记谱以及新型乐谱界面在当代音乐创作中的作用。曾担任澳大利亚国家级艺术与研究项目负责人,并在国际会议(如 TENOR、ICMC 等)和重要艺术机构中持续推动音乐、技术与文化政策的交叉研究。

Cat Hope is a contemporary composer and a researcher in electronic music and digital scores, with a long-standing practice in experimental music, non-traditional notation, and Digital Score research. Her scholarship and creative work emphasize performer agency, spatialised notation, and the role of new score interfaces in contemporary composition. She has led national-level Australian arts and research projects and continues to advance cross-disciplinary work on music, technology, and cultural policy through international conferences (such as TENOR and ICMC) and major arts institutions.


产业论坛嘉宾

Industry Forum Speakers


徐帆

Xu Fan

徐帆,目前在 Suno 从事生成式人工智能驱动的音乐创作产品与工程工作,负责将核心模型能力转化为面向用户的创作工具。作为早期创始团队一员,他参与了公司网页端与移动端产品从0到1的设计与开发。在此之前,他在 Meta 担任资深软件工程师,参与大规模数据系统以及 Meta Reality Labs 相关产品的开发。本科毕业于北京大学。

Xu Fan is currently working at Suno, where he focuses on generative AI–driven music creation products and engineering, translating core model capabilities into user-facing creative tools. As an early member of the founding team, he contributed to the design and development of the company’s web and mobile products from 0 to 1. Prior to this, he served as a Senior Software Engineer at Meta, where he worked on large-scale data systems as well as products related to Meta Reality Labs. He received his bachelor’s degree from Peking University.


龚俊民

Gong Junmin

龚俊民,ACE Studio 合伙人,ACE-Step 开源音乐生成模型系列作者。算法工程师出身,先后就职于多家头部科技公司,长期专注于音频与音乐生成方向。业余编曲、写词,与作词人方文山同门。一直相信做 AI 音乐最重要的不是模型本身,而是和真正懂音乐的人一起工作。

Gong Junmin is a partner at ACE Studio and the creator of the ACE-Step open-source music generation model series. With a background as an algorithm engineer, he has worked at several leading technology companies and has long focused on audio and music generation. In addition to his technical work, he is an amateur composer and lyricist, and is part of the same mentorship lineage as renowned lyricist Fang Wenshan. He firmly believes that the most critical factor in AI music is not the model itself, but collaborating with people who truly understand music.


姜涛

Jiang Tao

姜涛,本、硕、博毕业于哈尔滨工业大学,有多年的AI和音频算法研发经验及工程团队管理经验。先后在快手、腾讯音乐、昆仑万维组建了国内领先的音乐和音频算法团队,基于相关算法的产品功能已服务于千万级用户。2023年至2024年间,带领团队成为国内首家实现类Suno音乐生成模型并将其产品化服务用户的团队;在腾讯音乐期间,首创了K歌多维度评价、臻品音质等核心音乐消费功能,同时塑造了小天、小琴两款现象级虚拟歌手;在快手期间,完成了国内首个端到端AI音乐生成App(小森唱),以及作品原声、智能配乐、音悦台等核心创作和消费功能。有多段相关方向的创业经历。

Jiang Tao received his bachelor's, master's, and doctoral degrees from Harbin Institute of Technology. He has many years of experience in AI and audio algorithm research and development, as well as in engineering team management. He has successively built leading music and audio algorithm teams in China at Kuaishou, Tencent Music, and Kunlun Wanwei, with products powered by these technologies serving tens of millions of users. Between 2023 and 2024, he led his team to become the first in China to develop a Suno-like music generation model and successfully productize it for users. During his time at Tencent Music, he pioneered core music consumption features such as multi-dimensional evaluation for karaoke and premium audio quality, and helped create two breakout virtual singers, Xiaotian and Xiaoqin. At Kuaishou, he developed China's first end-to-end AI music generation app (Xiaosen Chang), featuring key creative and consumption functionalities such as original soundtrack generation, intelligent accompaniment, and music video platforms. He also has multiple entrepreneurial experiences in related fields.


刘晓光

Liu Xiaoguang

刘晓光,DeepMusic CEO,清华大学化学系本硕博,清华企业家协会青创会员,编曲师、键盘手、吉他手。有100+首音乐作品创作及制作经验,作品全网播放量数亿次。有多年音基教育经验。

Liu Xiaoguang is CEO of DeepMusic. He holds a Ph.D., M.S., and B.S. in Chemistry from Tsinghua University and is a member of the youth entrepreneurship chapter of the Tsinghua Entrepreneur & Executive Club. An arranger, keyboardist, and guitarist with over 100 original music compositions and production credits, he has seen his works achieve hundreds of millions of streams across major platforms. He also has years of experience in music fundamentals education.


青年论坛嘉宾

Youth Forum Speakers


戴琮人

Dai Congren

戴琮人,中央音乐学院与清华大学联合培养博士研究生(一年级在读),师从李小兵教授与孙茂松教授。此前先后获得纽芬兰纪念大学软件工程学士学位、伦敦国王学院数据科学理学硕士学位,以及伦敦帝国理工学院人工智能与机器学习研究型硕士学位。曾在多家企业从事人工智能与数据相关工作,包括于橡鹿机器人有限公司担任大模型算法工程师,在 Google 从事计算机视觉算法研发,并于英国 720 Management Ltd. 与万联证券担任数据分析师,在玉柴股份有限公司担任全栈工程师,具备跨领域的工程与研究经验。

Dai Congren is a first-year joint PhD student at the Central Conservatory of Music and Tsinghua University, supervised by Professor Xiaobing Li and Professor Maosong Sun. He previously received a Bachelor's degree in Software Engineering from Memorial University of Newfoundland, an MSc in Data Science from King's College London, and an MRes in Artificial Intelligence and Machine Learning from Imperial College London. He has worked in artificial intelligence and data-related roles across multiple organisations, including serving as an LLM Algorithm Engineer at Oak Deer Robotics Co., Ltd., conducting computer vision algorithm research and development at Google, working as a Data Analyst at 720 Management Ltd. in the United Kingdom and Wanlian Securities, and serving as a Full-Stack Engineer at Yuchai Co., Ltd. He has built broad interdisciplinary experience spanning both engineering and research.


丘治平

Qiu Zhiping

丘治平,中央音乐学院与清华大学联合培养在读博士研究生,师从中央音乐学院俞峰教授与清华大学戴琼海教授,长期致力于探索音乐与具身智能的交叉领域,涵盖从多模态感知、精细化动作生成到具身物理执行的完整路径。

Qiu Zhiping is a joint Ph.D. candidate at the Central Conservatory of Music and Tsinghua University, co-advised by Prof. Feng Yu and Prof. Qionghai Dai. He has long been dedicated to the intersection of music and embodied AI, covering the complete trajectory from multimodal perception and fine-grained motion generation to physical execution.


童心怡

Tong Xinyi

童心怡,中央音乐学院与北京大学联合培养在读博士,师从北京大学朱松纯教授与中央音乐学院俞峰教授,主要研究方向为多模态音乐生成,并致力于探索人工智能对音乐概念的建模,以及跨模态艺术表达对齐问题。以第一作者身份在AAAI(Oral Presentation)、CVPR及IEEE TCSS等国际学术会议与期刊发表多篇论文,参与撰写出版教材《音乐的人工智能 U-V理论》,并参与讲授北京大学通选课程《人工智能与音乐》。曾荣获首届国际通用人工智能大会优秀成果奖、教育部中美青年创客大赛主赛道一等奖等,持续在人工智能与艺术交叉前沿探索深层融合的可能。

Tong Xinyi is a joint Ph.D. candidate at the Central Conservatory of Music and Peking University, co-advised by Prof. Song-Chun Zhu and Prof. Feng Yu. Her primary research focuses on multimodal music generation, with a strong dedication to the AI-driven modeling of musical concepts and the cross-modal alignment of artistic expressions. She has published multiple papers in premier international conferences and journals, including AAAI (Oral Presentation), CVPR, and IEEE TCSS. Beyond her research, she co-authored the textbook Artificial Intelligence in Music: U-V Theory and co-lectures the general elective course "Artificial Intelligence and Music" at Peking University. Her honors include the Outstanding Achievement Award at the first International Conference on Artificial General Intelligence and a first prize in the main track of the Ministry of Education's China-U.S. Young Maker Competition.


吴尚达

Wu Shangda

吴尚达博士现任职于国内领先的互联网企业,致力于语音大模型领域的算法研究。他于2025年6月毕业于中央音乐学院,获音乐人工智能与信息科技博士学位,师从清华大学孙茂松教授与中央音乐学院俞峰教授。此前,他分别于2021年和2019年获得中山大学软件工程硕士学位及星海音乐学院钢琴表演学士学位。他的研究长期深耕人工智能与音乐的交叉领域,尤其在音乐生成与音乐信息检索(MIR)方向产出了多项成果。作为第一或共同第一作者,他在ACL、NAACL、IJCAI、ICASSP及ISMIR等人工智能与音乐领域的国际顶尖会议及期刊上发表了多篇学术论文。其代表性工作包括CLaMP系列多模态检索模型、NotaGen及ChatMusician等。凭借出色的科研能力,他曾荣获2023年国际音乐信息检索大会(ISMIR 2023)最佳学生论文奖,并先后获评2024年国家研究生一等学业奖学金及2025年北京市优秀毕业生。在正式投身工业界研究之前,吴尚达博士曾先后在微软亚洲研究院(MSRA)、微软Azure Cloud及字节跳动Seed-Music实验室担任研究实习生。他致力于深化跨学科研究,旨在通过与领域专家的紧密合作,持续推动音乐人工智能这一前沿科技领域的发展与突破。

Dr. Shangda Wu is currently a research scientist at a leading Chinese internet company, specializing in algorithm research for speech large language models. He obtained his Ph.D. in Music Artificial Intelligence and Information Technology from the Central Conservatory of Music in June 2025, where he was co-advised by Prof. Maosong Sun (Tsinghua University) and Prof. Feng Yu (Central Conservatory of Music). Dr. Wu holds a unique interdisciplinary background, having earned a Master of Science in Software Engineering from Sun Yat-sen University in 2021 and a Bachelor of Music in Piano Performance from the Xinghai Conservatory of Music in 2019. His research is deeply rooted in the intersection of AI and music, with significant contributions in music generation and music information retrieval (MIR). As a first or co-first author, he has published multiple papers in top-tier international conferences and journals in the fields of AI and music, including ACL, NAACL, IJCAI, ICASSP, and ISMIR. His representative works include the CLaMP series of multimodal retrieval models, NotaGen, and ChatMusician. Recognized for his academic excellence, he was honored with the Best Student Paper Award at ISMIR 2023 and has been a recipient of the First-Class National Graduate Academic Scholarship (2024) and the title of Outstanding Graduate of Beijing (2025). Prior to his current role in industry, Dr. Wu gained extensive experience as a research intern at Microsoft Research Asia (MSRA), Microsoft Azure Cloud, and ByteDance (Seed-Music). He is dedicated to advancing interdisciplinary research and aspires to push the frontiers of music AI through collaborative innovation with global experts.




组委会 Organizing Committee


大会主席 Conference Chair

于红梅


共同主席 Co-Chair

戴琼海


执行主席 Executive Chair

李小兵


名誉主席 Honorary Chairs

俞 峰


Jean-Michel Jarre


郭毅可院士


管晓宏院士


程序委员会(按姓氏笔画排序)Program Committee (listed by surname stroke order)

于阳 方恒健 王志鸥 孙茂松 邱志杰

吴玺宏 杨丽 栾家 钱琦

外事统筹 International Affairs Coordinator

陶倩

工作委员会(按姓氏笔画排序)Working Committee (listed by surname stroke order)

于海波 马军 王晓庆 王雪莹 王文潇 卢迪 刘家丰 孙宇明

李茜茜 张渊 张昕然 谷美莲 周晴雯 周麟一 周昊天 赵艺璇 柴扉 高妍

志愿者(按姓氏笔画排序)Volunteers (listed by surname stroke order)

卜禹翔 亓佳宁 王楚旖 王文楚 王茜 王紫 王鑫琛 冯子骜 丘悦欣 刘俊汝 刘毅 刘恩洋 孙静茹 许玥 童晖 芦乐妍 肖翔 杨佳一 杨婷絮 张博 陈菲 李小宁 李頔 李思麒 何紫怡 金戈 林向彬 林雨声 段晨 洪若希 赵雪丹妮 海纳 龚颉芸 黄千倪 黄都 黄文杰 隋林木 梁世杰 彭晨 蓝善美 魏圣普



新闻来源:中央音乐学院 Source: Central Conservatory of Music
