[SystemSafety] Breaking Down Problems with A.I. Safety
Chuck_Petras at selinc.com
Fri Jun 24 23:16:45 CEST 2016
You may find this interesting...
"Now comes the next OpenAI initiative: a breakdown of 'concrete problems
in A.I. safety.' OpenAI researchers, in conjunction with colleagues from
Google Brain, have issued a paper that delineates how machine-learning
systems can potentially go haywire. ... 'Many of the problems are not new,
but the paper explores them in the context of cutting-edge systems,'
explains a brief intro on OpenAI's Website. 'We hope they'll inspire more
people to work on A.I. safety research, whether at OpenAI or elsewhere.'"
-
http://insights.dice.com/2016/06/22/breaking-down-problems-with-ai-safety/
Concrete Problems in AI Safety
https://arxiv.org/pdf/1606.06565v1.pdf
And now there is this.
IBM WATSON AI XPRIZE [US$ 5MM] - INCENTIVIZING INNOVATIVE AI APPROACHES &
COLLABORATION
http://ai.xprize.org/
Chuck Petras, PE**
Schweitzer Engineering Laboratories, Inc
Pullman, WA 99163 USA
http://www.selinc.com
Tel: +1.509.332.1890
SEL Synchrophasors - A New View of the Power System <http://synchrophasor.selinc.com>
Making Electric Power Safer, More Reliable, and More Economical (R)
** Registered in Oregon.