About the authors:

邓蔚 (DENG Wei): lecturer and postdoctoral researcher. Main research interests: knowledge graphs, machine behavior, computational social science, and algorithmic ethics. In recent years, has participated in 3 national-level projects, including a key project of the National Natural Science Foundation of China and a National Key R&D Program project; has applied for more than 10 national invention patents, published more than 30 academic papers, and authored 1 academic monograph.

邢钰晗 (XING Yuhan): master's student. Main research interests: fairness in machine learning and data science.

王国胤 (WANG Guoyin): professor and doctoral supervisor; vice president of Chongqing University of Posts and Telecommunications, dean of its Graduate School and of its School of Artificial Intelligence, and vice chairman of the Chinese Association for Artificial Intelligence. Main research interests: rough sets, granular computing, and cognitive computing. In recent years, has led several National Key R&D Program projects and key projects of the National Natural Science Foundation of China; has published more than 300 academic papers and more than 10 monographs.