001476050 000__ 06910cam\\22006377a\4500
001476050 001__ 1476050
001476050 003__ OCoLC
001476050 005__ 20231003174630.0
001476050 006__ m\\\\\o\\d\\\\\\\\
001476050 007__ cr\un\nnnunnun
001476050 008__ 230819s2023\\\\cau\\\\\ob\\\\001\0\eng\d
001476050 019__ $$a1394134987
001476050 020__ $$a9781484293065$$q(electronic bk.)
001476050 020__ $$a1484293061$$q(electronic bk.)
001476050 020__ $$z1484293053
001476050 020__ $$z9781484293058
001476050 0247_ $$a10.1007/978-1-4842-9306-5$$2doi
001476050 035__ $$aSP(OCoLC)1394118956
001476050 040__ $$aEBLCP$$beng$$cEBLCP$$dYDX$$dGW5XE$$dOCLCO
001476050 049__ $$aISEA
001476050 050_4 $$aQ334.7
001476050 08204 $$a174/.90063$$223/eng/20230828
001476050 1001_ $$aDuke, Toju.
001476050 24510 $$aBuilding responsible AI algorithms :$$ba framework for transparency, fairness, safety, privacy, and robustness /$$cToju Duke.
001476050 260__ $$aBerkeley, CA :$$bApress L. P.,$$c2023.
001476050 300__ $$a1 online resource (196 p.)
001476050 500__ $$aUse Pretrained Models and Cloud APIs
001476050 504__ $$aIncludes bibliographical references and index.
001476050 5050_ $$aIntro -- Table of Contents -- About the Author -- About the Technical Reviewer -- Introduction -- Part I: Foundation -- Chapter 1: Responsibility -- Avoiding the Blame Game -- Being Accountable -- Eliminating Toxicity -- Thinking Fairly -- Protecting Human Privacy -- Ensuring Safety -- Summary -- Chapter 2: AI Principles -- Fairness, Bias, and Human-Centered Values -- Google -- The Organisation for Economic Cooperation and Development (OECD) -- The Australian Government -- Transparency and Trust -- Accountability -- Social Benefits -- Privacy, Safety, and Security -- Summary
001476050 5058_ $$aChapter 3: Data -- The History of Data -- Data Ethics -- Ownership -- Data Control -- Transparency -- Accountability -- Equality -- Privacy -- Intention -- Outcomes -- Data Curation -- Best Practices -- Annotation and Filtering -- Rater Diversity -- Synthetic Data -- Data Cards and Datasheets -- Model Cards -- Tools -- Alternative Datasets -- Summary -- Part II: Implementation -- Chapter 4: Fairness -- Defining Fairness -- Equalized Odds -- Equal Opportunity -- Demographic Parity -- Fairness Through Awareness -- Fairness Through Unawareness -- Treatment Equality -- Test Fairness
001476050 5058_ $$aCounterfactual Fairness -- Fairness in Relational Domains -- Conditional Statistical Parity -- Types of Bias -- Historical Bias -- Representation Bias -- Measurement Bias -- Aggregation Bias -- Evaluation Bias -- Deployment Bias -- Measuring Fairness -- Fairness Tools -- Summary -- Chapter 5: Safety -- AI Safety -- Autonomous Learning with Benign Intent -- Human Controlled with Benign Intent -- Human Controlled with Malicious Intent -- AI Harms -- Discrimination, Hate Speech, and Exclusion -- Information Hazards -- Misinformation Harms -- Malicious Uses -- Human-Computer Interaction Harms
001476050 5058_ $$aEnvironmental and Socioeconomic Harms -- Mitigations and Technical Considerations -- Benchmarking -- Summary -- Chapter 6: Human-in-the-Loop -- Understanding Human-in-the-Loop -- Human Annotation Case Study: Jigsaw Toxicity Classification -- Rater Diversity Case Study: Jigsaw Toxicity Classification -- Task Design -- Measures -- Results and Conclusion -- Risks and Challenges -- Summary -- Chapter 7: Explainability -- Explainable AI (XAI) -- Implementing Explainable AI -- Data Cards -- Model Cards -- Open-Source Toolkits -- Accountability -- Dimensions of AI Accountability -- Governance Structures
001476050 5058_ $$aData -- Performance Goals and Metrics -- Monitoring Plans -- Explainable AI Tools -- Summary -- Chapter 8: Privacy -- Privacy Preserving AI -- Federated Learning -- Digging Deeper -- Differential Privacy -- Differential Privacy and Fairness Tradeoffs -- Summary -- Chapter 9: Robustness -- Robust ML Models -- Sampling -- Bias Mitigation (Preprocessing) -- Data Balancing -- Data Augmentation -- Cross-Validation -- Ensembles -- Bias Mitigation (In-Processing and Post-Processing) -- Transfer Learning -- Adversarial Training -- Making Your ML Models Robust -- Establish a Strong Baseline Model
001476050 506__ $$aAccess limited to authorized users.
001476050 520__ $$aThis book introduces a Responsible AI framework and guides you through processes to apply at each stage of the machine learning (ML) life cycle, from problem definition to deployment, to reduce and mitigate the risks and harms found in artificial intelligence (AI) technologies. AI offers the ability to solve many problems today if implemented correctly and responsibly. This book helps you avoid negative impacts that in some cases have caused loss of life and develop models that are fair, transparent, safe, secure, and robust. The approach in this book raises your awareness of the missteps that can lead to negative outcomes in AI technologies and provides a Responsible AI framework to deliver responsible and ethical results in ML. It begins with an examination of the foundational elements of responsibility, principles, and data. Next comes guidance on implementation addressing issues such as fairness, transparency, safety, privacy, and robustness. The book helps you think responsibly while building AI and ML models and guides you through practical steps aimed at delivering responsible ML models, datasets, and products for your end users and customers. What You Will Learn: Build AI/ML models using Responsible AI frameworks and processes; Document information on your datasets and improve data quality; Measure fairness metrics in ML models; Identify harms and risks per task and run safety evaluations on ML models; Create transparent AI/ML models; Develop Responsible AI principles and organizational guidelines.
001476050 588__ $$aOnline resource; title from PDF title page (SpringerLink, viewed August 28, 2023).
001476050 650_0 $$aArtificial intelligence$$xMoral and ethical aspects.
001476050 650_0 $$aMachine learning$$xMoral and ethical aspects.
001476050 650_6 $$aIntelligence artificielle$$xAspect moral.
001476050 650_6 $$aApprentissage automatique$$xAspect moral.
001476050 655_0 $$aElectronic books.
001476050 77608 $$iPrint version:$$aDuke, Toju$$tBuilding Responsible AI Algorithms$$dBerkeley, CA : Apress L. P.,c2023$$z9781484293058
001476050 852__ $$bebk
001476050 85640 $$3Springer Nature$$uhttps://univsouthin.idm.oclc.org/login?url=https://link.springer.com/10.1007/978-1-4842-9306-5$$zOnline Access$$91397441.1
001476050 909CO $$ooai:library.usi.edu:1476050$$pGLOBAL_SET
001476050 980__ $$aBIB
001476050 980__ $$aEBOOK
001476050 982__ $$aEbook
001476050 983__ $$aOnline
001476050 994__ $$a92$$bISE