About the Forum
Forum Time
Thursday, March 3, 2022, 10:00-12:00
Forum Agenda
01 Keynote Talk
Host:
Baoyuan Wu (Associate Professor, The Chinese University of Hong Kong, Shenzhen)
Speaker:
Peng Cui (Associate Professor, Tsinghua University)
Talk Title:
Stable Learning: Finding the Common Ground between Causal Inference and Machine Learning
(Published in a Nature Portfolio journal: https://www.nature.com/articles/s42256-022-00445-z)
02 Tencent Trustworthy AI Research Sharing
Host:
Yatao Bian (Senior Researcher, Tencent AI Lab)
Speakers:
Bingzhe Wu (Senior Researcher, Tencent AI Lab)
Huanchao Wang (Researcher, Tencent Research Institute)
03 Roundtable Discussion
Host:
Jianfeng Cao (Senior Researcher, Tencent Research Institute)
Panelists:
Peng Cui (Associate Professor, Tsinghua University)
Jiyu Zhang (Executive Director, Institute of Law and Technology, Renmin University of China)
Jianhua Yao (Chief Scientist of AI Healthcare, Tencent AI Lab)
Topics:
1. The importance and value of trustworthy AI
2. The current state, challenges, and legal requirements of explainable AI
3. Achieving explainability and fairness in AI
Hosts
- Shenzhen Research Institute of Big Data
- China Society of Image and Graphics
Organizers
- Tencent AI Lab
- Tencent Research Institute
Co-organizers
- School of Data Science, The Chinese University of Hong Kong, Shenzhen
- IEEE Guangzhou Section Biometrics Council Chapter
Live Streaming
Bilibili live stream: http://live.bilibili.com/22947067
Live stream on the Tencent Research Institute WeChat Channels account
Speaker Biography
Peng Cui (Associate Professor, Tsinghua University)
Peng Cui is a tenured Associate Professor at Tsinghua University. He received his PhD from Tsinghua University in 2010. His research interests include causal inference and stable learning, network representation learning, and social dynamics modeling. He has published more than 100 papers in prestigious conferences and journals in machine learning, data mining, and multimedia. His recent research has won five best paper awards and was selected into the Best of KDD special issues in 2014 and 2016. He serves as an Associate Editor of IEEE TKDE, IEEE TBD, ACM TIST, ACM TOMM, DMKD, and KAIS, among others, and was a program co-chair of ACM CIKM 2019 and MMM 2020. He is a Distinguished Member of ACM and CCF, and a Senior Member of IEEE.
Talk Abstract
Stable Learning: Finding the Common Ground between Causal Inference and Machine Learning
A common machine learning problem is to predict future outcome values based on observed features, using a model estimated on a training data set. Many learning algorithms have been proposed and shown to be successful when the test data and training data come from the same distribution. However, the best-performing models for a given training distribution typically exploit subtle statistical relationships among features, making them potentially more prone to prediction error when applied to test data whose distribution differs from that of the training data. How to develop learning models that are stable and robust to shifts in data is of paramount importance for both academic research and real applications. Causal inference, which refers to the process of drawing conclusions about causal connections based on the conditions under which an effect occurs, is a powerful statistical modeling tool for explanatory and stable learning. In this talk, we focus on the latest progress in stable learning, which aims to explore causal knowledge from observational data to improve the interpretability and stability of machine learning algorithms.
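To make the distribution-shift problem described in the abstract concrete, below is a minimal, hypothetical sketch (not the speaker's code or the actual stable-learning method): a logistic regression is trained on toy data in which a non-causal feature happens to correlate strongly with the label, then evaluated on a test set where that correlation is reversed. All data, variable names, and parameters here are illustrative assumptions.

```python
# Toy illustration of distribution shift via a spurious correlation.
# Not the speaker's method; data and parameters are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    """Binary label y; x_causal truly drives y, while x_spurious merely
    agrees with y with probability `spurious_corr` (flipped otherwise)."""
    y = rng.integers(0, 2, size=n)
    x_causal = y + 0.5 * rng.normal(size=n)           # stable, causal signal
    agree = rng.random(n) < spurious_corr             # correlation strength
    x_spurious = np.where(agree, y, 1 - y) + 0.1 * rng.normal(size=n)
    return np.column_stack([x_causal, x_spurious]), y

# Training distribution: the spurious feature agrees with y 95% of the time.
X_train, y_train = make_data(5000, spurious_corr=0.95)
# Shifted test distribution: the spurious correlation is reversed.
X_test, y_test = make_data(5000, spurious_corr=0.05)

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))
print("shifted-test accuracy:", clf.score(X_test, y_test))
print("learned weights [causal, spurious]:", clf.coef_[0])
```

Under these toy assumptions, the classifier leans heavily on the spurious feature and its accuracy drops sharply on the shifted test set; this is the failure mode that stable learning, by recovering and relying on causal rather than merely correlated features, aims to address.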
We Are Hiring
The Chinese University of Hong Kong, Shenzhen and the Shenzhen Research Institute of Big Data are recruiting full-time research scientists, data engineers, and visiting students in AI security and privacy, as well as postdoctoral researchers and PhD students enrolling in Fall 2022 (in AI security and privacy, computer vision, machine learning, and related areas). For more information about these positions, please follow the recruitment link.
The Trustworthy AI Technology Center at Tencent AI Lab is recruiting full-time employees and research interns (in fairness, explainability, robust optimization, and related areas). For more information about these positions, please scan the QR code below.