Details

Foreword
References
Acknowledgements
Contents

1 Introduction to the Technological Singularity
  1.1 Why the "Singularity" Is Important
  1.2 Superintelligence, Superpowers
  1.3 Danger, Danger!
  1.4 Uncertainties and Safety
  References

Risks of, and Responses to, the Journey to the Singularity

2 Risks of the Journey to the Singularity
  2.1 Introduction
  2.2 Catastrophic AGI Risk
    2.2.1 Most Tasks Will Be Automated
    2.2.2 AGIs Might Harm Humans
    2.2.3 AGIs May Become Powerful Quickly
      2.2.3.1 Hardware Overhang
      2.2.3.2 Speed Explosion
      2.2.3.3 Intelligence Explosion
  References

3 Responses to the Journey to the Singularity
  3.1 Introduction
  3.2 Post-Superintelligence Responses
  3.3 Societal Proposals
    3.3.1 Do Nothing
      3.3.1.1 AI Is Too Distant to Be Worth Our Attention
      3.3.1.2 Little Risk, No Action Needed
      3.3.1.3 Let Them Kill Us
      3.3.1.4 "Do Nothing" Proposals - Our View
    3.3.2 Integrate with Society
      3.3.2.1 Legal and Economic Controls
      3.3.2.2 Foster Positive Values
      3.3.2.3 "Integrate with Society" Proposals - Our View
    3.3.3 Regulate Research
      3.3.3.1 Review Boards
      3.3.3.2 Encourage Research into Safe AGI
      3.3.3.3 Differential Technological Progress
      3.3.3.4 International Mass Surveillance
      3.3.3.5 "Regulate Research" Proposals - Our View
    3.3.4 Enhance Human Capabilities
      3.3.4.1 Would We Remain Human?
      3.3.4.2 Would Evolutionary Pressures Change Us?
      3.3.4.3 Would Uploading Help?
      3.3.4.4 "Enhance Human Capabilities" Proposals - Our View
    3.3.5 Relinquish Technology
      3.3.5.1 Outlaw AGI
      3.3.5.2 Restrict Hardware
      3.3.5.3 "Relinquish Technology" Proposals - Our View
  3.4 External AGI Constraints
    3.4.1 AGI Confinement
      3.4.1.1 Safe Questions
      3.4.1.2 Virtual Worlds
      3.4.1.3 Resetting the AGI
      3.4.1.4 Checks and Balances
      3.4.1.5 "AI Confinement" Proposals - Our View
    3.4.2 AGI Enforcement
      3.4.2.1 "AGI Enforcement" Proposals - Our View
  3.5 Internal Constraints
    3.5.1 Oracle AI
      3.5.1.1 Oracles Are Likely to Be Released
      3.5.1.2 Oracles Will Become Authorities
      3.5.1.3 "Oracle AI" Proposals - Our View
    3.5.2 Top-Down Safe AGI
      3.5.2.1 Three Laws
      3.5.2.2 Categorical Imperative
      3.5.2.3 Principle of Voluntary Joyous Growth
      3.5.2.4 Utilitarianism
      3.5.2.5 Value Learning
      3.5.2.6 Approval-Directed Agents
      3.5.2.7 "Top-Down Safe AGI" Proposals - Our View
    3.5.3 Bottom-Up and Hybrid Safe AGI
      3.5.3.1 Evolutionary Invariants
      3.5.3.2 Evolved Morality
      3.5.3.3 Reinforcement Learning
      3.5.3.4 Human-like AGI
      3.5.3.5 "Bottom-Up and Hybrid Safe AGI" Proposals - Our View
    3.5.4 AGI Nanny
      3.5.4.1 "AGI Nanny" Proposals - Our View
    3.5.5 Motivational Scaffolding
    3.5.6 Formal Verification
      3.5.6.1 "Formal Verification" Proposals - Our View
    3.5.7 Motivational Weaknesses
      3.5.7.1 High Discount Rates
      3.5.7.2 Easily Satiable Goals
      3.5.7.3 Calculated Indifference
      3.5.7.4 Programmed Restrictions
      3.5.7.5 Legal Machine Language
      3.5.7.6 "Motivational Weaknesses" Proposals - Our View
  3.6 Conclusion
