000 02346cam a2200313 i 4500
003 PIMLIB
005 20250418092406.0
008 241218s2020    nyu      b    001 0 eng  
010 _a2020029036
020 _a9780393868333
020 _a9780393635829
_q(hardcover)
020 _z9780393635836
_q(epub)
050 0 0 _aQ334.7
_b.C488A 2020
100 1 _aChristian, Brian,
_d1984-
_eauthor.
_963972
245 1 4 _aThe alignment problem :
_bmachine learning and human values /
_cBrian Christian.
264 1 _aNew York, NY :
_bW.W. Norton & Company,
_c2020.
300 _axii, 476 pages ;
_c25 cm.
504 _aIncludes bibliographical references (pages [401]-451) and index.
520 _a"A jaw-dropping exploration of everything that goes wrong when we build AI systems-and the movement to fix them. Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us-and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole-and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill. When systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's "first-responders," and learn their ambitious plan to solve it before our hands are completely off the wheel"--
_cProvided by publisher.
650 0 _aArtificial intelligence
_xMoral and ethical aspects.
_928335
650 0 _aArtificial intelligence
_xSocial aspects.
_928335
650 0 _aMachine learning
_xSafety measures.
_938969
650 0 _aSoftware failures.
_9101198
650 0 _aSocial values.
_9101199
690 _950113
_a0100 สำนักอธิการบดี
906 _a7
_bcbc
_corignew
_d1
_eecip
_f20
_gy-gencatlg
942 _2lcc
_cBK
_n0
_01
999 _c1001587
_d1001587