For example, an algorithm such as COMPAS might purport to predict the chance of future criminal activity, but it can only rely on measurable proxies, such as being arrested.
Can we ever really trust algorithms to make decisions for us? Previous research has shown that these programs can reinforce society's harmful biases, but the problems go beyond that. A new study ...
Neural networks are demonstrating profound leaps in their abilities when they're tasked with open exploration instead of a narrowly focused goal.
For example, an algorithm called CB (color blind) imposes the restriction that any discriminating variables, such as race or gender, should not be used in predicting the outcomes.
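The CB restriction amounts to a preprocessing step: strip the protected attributes before any predictor sees the data. A minimal sketch, with hypothetical column names (the source names no dataset), is:

```python
# Sketch of a "color blind" (CB) style preprocessing step: drop protected
# attributes before fitting any predictor. Column names are hypothetical.
# Note: this does NOT remove information merely correlated with the protected
# attributes (proxies such as zip code can still encode them).

def color_blind(rows, protected):
    """Return copies of each record with protected attributes removed."""
    return [{k: v for k, v in row.items() if k not in protected} for row in rows]

applicants = [
    {"income": 52000, "zip": "60644", "race": "B", "gender": "F"},
    {"income": 48000, "zip": "60601", "race": "W", "gender": "M"},
]
blinded = color_blind(applicants, protected={"race", "gender"})
# blinded[0] == {"income": 52000, "zip": "60644"}
```

The caveat in the comment is the standard criticism of this approach: blinding the model to race or gender does not blind it to variables that stand in for them.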
Making Algorithms More Like Kids: What Can Four-Year-Olds Do That AI Can't? Thomas Hornigold, Jun 26, 2019. Instead of trying to produce a programme to simulate the adult mind, why not rather try to ...
In another example, a commonly used algorithm for predicting the success of vaginal birth after a prior cesarean (VBAC) delivery predicts lower success for Black and Hispanic mothers relative to ...
Under the right circumstances, algorithms can be more transparent than human decision-making, and can even be used to develop a more equitable society.
I spend all day making decisions, and they’re not always good ones. Could an algorithm do a better job of deciding what’s best for me?
For example, learning to predict a specific threat and to elicit an appropriate response to predictors (as in Pavlovian fear conditioning) can be abstractly described by a single decision-making ...
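One classical formalization of this kind of predictive learning (a sketch under my own assumptions, since the snippet is truncated before naming its algorithm) is the Rescorla-Wagner update, in which a cue's predicted threat value is nudged toward each observed outcome by a prediction error:

```python
# Rescorla-Wagner sketch of Pavlovian prediction learning. V is the cue's
# predicted threat value; each trial moves V toward the observed outcome
# in proportion to the learning rate alpha. Parameter values are illustrative.

def rescorla_wagner(outcomes, alpha=0.3, v0=0.0):
    """Return the cue's predicted value after each conditioning trial."""
    v, history = v0, []
    for outcome in outcomes:           # outcome: 1.0 = shock present, 0.0 = absent
        v = v + alpha * (outcome - v)  # prediction error drives the update
        history.append(v)
    return history

# Repeated cue-shock pairings: the prediction climbs toward 1.0
values = rescorla_wagner([1.0] * 5)
```

The single update rule covers both acquisition (pairing trials raise the prediction) and extinction (outcome-absent trials lower it), which is the sense in which one decision-making description can span the whole phenomenon.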
It doesn’t take much to make machine-learning algorithms go awry. The rise of large language models could make the problem worse ...