Machine Learning for High-Risk Applications (Reprint Edition)
Patrick Hall, James Curtis, Parul Pandey
Publication date: March 2024
Pages: 468
"The authors do an excellent job of surveying regulation, risk management, explainability, and many other topics, while providing practical advice and code examples."
——Christoph Molnar
Author of Interpretable Machine Learning

"This book stands out for its distinctly tactical approach to addressing systemic risk in ML. By taking a nuanced approach to mitigating ML risk, it gives readers a valuable resource for deploying ML systems successfully, responsibly, and sustainably."
——Liz Grennan
Associate Partner and Global Co-Lead of Digital Trust, McKinsey & Company

The past decade has seen the widespread adoption of artificial intelligence and machine learning (AI/ML) technologies. A lack of oversight during this broad rollout, however, has led to incidents and harmful outcomes that proper risk management could have prevented. Before we can realize the true benefits of AI/ML, practitioners must understand how to mitigate its risks.
This book describes an approach to responsible AI: a holistic framework for improving AI/ML technologies, business processes, and cultural competencies, grounded in best practices from risk management, cybersecurity, data privacy, and applied social science. Authors Patrick Hall, James Curtis, and Parul Pandey wrote this guide for data scientists who want to help their organizations, consumers, and the public improve the outcomes of real-world AI/ML systems.
● Learn technical approaches to responsible AI, including explainability, model validation and debugging, bias management, data privacy, and ML security
● Learn how to build a successful and impactful AI risk management practice
● Get a basic guide to existing standards, laws, and assessments for adopting AI technologies, including the new NIST AI Risk Management Framework
● Engage with interactive resources on GitHub and Colab
Foreword
Preface
Part I. Theories and Practical Applications of AI Risk Management
  1. Contemporary Machine Learning Risk Management
     A Snapshot of the Legal and Regulatory Landscape
     Authoritative Best Practices
     AI Incidents
     Cultural Competencies for Machine Learning Risk Management
     Organizational Processes for Machine Learning Risk Management
     Case Study: The Rise and Fall of Zillow’s iBuying
     Resources
  2. Interpretable and Explainable Machine Learning
     Important Ideas for Interpretability and Explainability
     Explainable Models
     Post Hoc Explanation
     Stubborn Difficulties of Post Hoc Explanation in Practice
     Pairing Explainable Models and Post Hoc Explanation
     Case Study: Graded by Algorithm
     Resources
  3. Debugging Machine Learning Systems for Safety and Performance
     Training
     Model Debugging
     Deployment
     Case Study: Death by Autonomous Vehicle
     Resources
  4. Managing Bias in Machine Learning
     ISO and NIST Definitions for Bias
     Legal Notions of ML Bias in the United States
     Who Tends to Experience Bias from ML Systems
     Harms That People Experience
     Testing for Bias
     Mitigating Bias
     Case Study: The Bias Bug Bounty
     Resources
  5. Security for Machine Learning
     Security Basics
     Machine Learning Attacks
     General ML Security Concerns
     Countermeasures
     Case Study: Real-World Evasion Attacks
     Resources
Part II. Putting AI Risk Management into Action
  6. Explainable Boosting Machines and Explaining XGBoost
     Concept Refresher: Machine Learning Transparency
     The GAM Family of Explainable Models
     XGBoost with Constraints and Post Hoc Explanation
     Resources
  7. Explaining a PyTorch Image Classifier
     Explaining Chest X-Ray Classification
     Concept Refresher: Explainable Models
     Explainable Models
     Training and Explaining a PyTorch Image Classifier
     Conclusion
     Resources
  8. Selecting and Debugging XGBoost Models
     Concept Refresher: Debugging ML
     Selecting a Better XGBoost Model
     Sensitivity Analysis for XGBoost
     Residual Analysis for XGBoost
     Remediating the Selected Model
     Conclusion
     Resources
  9. Debugging a PyTorch Image Classifier
     Concept Refresher: Debugging Deep Learning
     Debugging a PyTorch Image Classifier
     Conclusion
     Resources
  10. Testing and Remediating Bias with XGBoost
     Concept Refresher: Managing ML Bias
     Model Training
     Evaluating Models for Bias
     Remediating Bias
     Conclusion
     Resources
  11. Red-Teaming XGBoost
     Concept Refresher
     Model Training
     Attacks for Red-Teaming
     Conclusion
     Resources
Part III. Conclusion
  12. How to Succeed in High-Risk Machine Learning
     Who Is in the Room?
     Science Versus Engineering
     Evaluation of Published Results and Claims
     Apply External Standards
     Commonsense Risk Mitigation
     Conclusion
     Resources
Index
Chinese publisher: Southeast University Press (东南大学出版社)
ISBN: 978-7-5766-1291-2
Original title: Machine Learning for High-Risk Applications
Original publisher: O'Reilly Media
Patrick Hall
Patrick Hall is principal scientist at BNH.AI and a visiting professor at George Washington University.

James Curtis
James Curtis is a quantitative researcher at Solea Energy.

Parul Pandey
Parul Pandey is a principal data scientist at H2O.ai.

The animal on the cover of Machine Learning for High-Risk Applications is the giant African fruit beetle (Mecynorrhina polyphemus).

Formerly classified under the Latin name Chelorrhina polyphemus, this large, green scarab beetle belongs to the flower chafer subfamily Cetoniinae, a group of brightly colored beetles that feed primarily on flower pollen, nectar, and petals, as well as fruits and tree sap. Ranging from 35 to 80 mm in length, giant African fruit beetles are among the largest beetles in the genus Mecynorrhina.

These colossal scarabs are found in the dense tropical forests of Central Africa. The adults are sexually dimorphic, with the females having a shiny, prismatic carapace, and the males having antlers and a more velvety or matte coloration. As attractive and relatively easy-to-raise beetles, they make popular pets among aspiring entomologists. This fact, along with habitat destruction, has been cited by at least one study as a factor in population declines in some areas, though they remain common overall.
Many of the animals on O’Reilly covers are endangered; all of them are important to the world.
Purchase options
Price: ¥138.00