News
Can we ever really trust algorithms to make decisions for us? Previous research has shown these programs can reinforce society’s harmful biases, but the problems go beyond that. A new study ...
Neural networks are demonstrating profound leaps in their abilities when they're tasked with open exploration instead of a narrowly focused goal.
For example, an algorithm called CB (color blind) imposes the restriction that potentially discriminatory variables, such as race or gender, are not used to predict outcomes.
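A minimal sketch of the "color blind" idea, assuming a scikit-learn style tabular workflow; the dataset, file name, and column names are hypothetical, and CB here is read simply as dropping the protected attributes before the model is fit (sometimes called fairness through unawareness).

```python
# Sketch of a "color blind" (CB) training step: protected attributes are
# removed from the feature set before fitting. Dataset, column names, and
# model choice are hypothetical, for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

PROTECTED = ["race", "gender"]           # attributes CB refuses to use

df = pd.read_csv("applications.csv")     # hypothetical tabular dataset
X = df.drop(columns=PROTECTED + ["outcome"])
y = df["outcome"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

Note that dropping the columns does not, by itself, remove bias carried by correlated proxy features, which is one reason this restriction alone is often considered insufficient.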
Under the right circumstances, algorithms can be more transparent than human decision-making, and can even be used to develop a more equitable society.
It doesn’t take much to make machine-learning algorithms go awry. The rise of large language models could make the problem worse ...
In another example, a commonly used algorithm for predicting the success of vaginal birth after cesarean (VBAC) predicts lower success rates for Black and Hispanic mothers relative to ...
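To make the mechanism concrete, the sketch below uses a logistic model with purely hypothetical coefficients, not the published VBAC calculator, to show how a race/ethnicity indicator with a negative weight mechanically lowers the predicted success probability for the flagged group even when every other input is identical.

```python
# Illustrative only: a logistic risk model with a race/ethnicity indicator.
# Coefficients are hypothetical and are NOT those of the published VBAC
# calculator; the point is the mechanism, not the numbers.
import math

def predicted_success(age: float, prior_vaginal_birth: bool,
                      black_or_hispanic: bool) -> float:
    """Return a predicted probability of a successful VBAC."""
    logit = (
        2.0                              # intercept (hypothetical)
        - 0.03 * age                     # hypothetical age effect
        + 0.9 * prior_vaginal_birth      # hypothetical positive effect
        - 0.6 * black_or_hispanic        # the contested race/ethnicity term
    )
    return 1.0 / (1.0 + math.exp(-logit))

# Two otherwise identical patients receive different scores solely
# because of the race/ethnicity flag.
print(predicted_success(30, True, False))   # higher predicted success
print(predicted_success(30, True, True))    # lower predicted success
```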
Algorithms Can Make Good Co-Workers. The robots aren’t going to replace humans; we’ll just all work together.
A study published Thursday in Science found that a health care risk-prediction algorithm used on more than 200 million people in the U.S. demonstrated racial bias ...
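One way such bias is surfaced is a simple disparity audit: compare the algorithm's risk scores against an independent measure of need across racial groups. The sketch below is a generic audit over hypothetical columns and file names, not the methodology of the Science study itself.

```python
# Generic disparity audit of a deployed risk score.
# File name and column names are hypothetical; the idea is that patients
# assigned the same risk-score band should show roughly equal measured
# need regardless of group membership.
import pandas as pd

df = pd.read_csv("risk_scores.csv")   # hypothetical: one row per patient

# Bin patients into deciles of the algorithm's risk score.
df["score_band"] = pd.qcut(df["risk_score"], q=10, labels=False)

# Within each band, compare an independent measure of health need
# (e.g., number of chronic conditions) across racial groups.
audit = (
    df.groupby(["score_band", "race"])["chronic_conditions"]
      .mean()
      .unstack("race")
)
print(audit)  # systematic gaps within a band indicate biased scoring
```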