2019.10 News Weekly 4
-
Tesla’s ‘Full Self-Driving’ feature may get early-access release by the end of 2019
-
Over the years I have seen thousands of graduate students, many of them outstanding. Yet looking at them ten years after they earned their PhDs, some turned out very successful and some very unsuccessful. It is not that their abilities differed that much; people who have earned a PhD are usually quite capable. Nor is it that some worked hard and others did not.
The main difference is that some chose the right direction. If you head into a direction that is already a spent force, there is nothing you can do, and the further you go the harder it is to get out; switching directions is not easy, and if you keep going you end up the unluckiest of all. I hope every graduate student here understands what these few sentences mean.
-
Lage, Isaac, et al. “An Evaluation of the Human-Interpretability of Explanation.” arXiv preprint arXiv:1902.00006 (2019).
- Motivation: The evaluation of interpretable machine learning systems is challenging, as explanation is almost always a means toward some downstream task.
- Research question: how does increasing different types of explanation complexity affect the usability of explanations across three tasks? The complexity dimensions studied (a toy sketch after this list shows one way they might be quantified):
- Explanation Size
- Creating New Types of Cognitive Chunks
- Repeated Terms in an Explanation
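As a rough illustration (my own sketch, not from the paper), the three dimensions can be made concrete by representing an explanation as a list of rule lines, each a list of terms. The names `complexity_profile`, `explanation`, and `new_concepts` below are made up for this sketch, as is the representation itself.

```python
from collections import Counter

def complexity_profile(explanation, new_concepts=()):
    """Quantify three complexity dimensions of a rule-style explanation.

    `explanation` is assumed to be a list of lines, each a list of terms
    (strings); `new_concepts` lists intermediate concepts ("cognitive
    chunks") the explanation introduces beyond the raw input features.
    This representation is illustrative, not the paper's.
    """
    terms = [t for line in explanation for t in line]
    counts = Counter(terms)
    return {
        # Explanation size: how many lines the user has to read.
        "size": len(explanation),
        # New cognitive chunks: intermediate concepts the explanation uses.
        "new_chunks": sum(1 for c in new_concepts if c in counts),
        # Repeated terms: how often the most frequent term recurs.
        "max_term_repeats": max(counts.values()) if counts else 0,
    }

# Toy usage: a three-line explanation that introduces one new concept
# ("citrus") and repeats the term "sweet" twice.
example = [
    ["citrus", "sweet"],
    ["not", "spicy"],
    ["sweet", "crunchy"],
]
print(complexity_profile(example, new_concepts=["citrus"]))
# {'size': 3, 'new_chunks': 1, 'max_term_repeats': 2}
```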
- Methods:
- Tasks
- simulating the system’s response
- verification of a suggested response
- counterfactual reasoning (these three tasks were chosen because they are general enough to be relevant across a variety of domains)
- Metrics
- response time
- accuracy
- subjective satisfaction
- Results
- The magnitude of the effect on performance varies with the type of complexity.
- Surprisingly, implicit cognitive chunks were faster for people to process than explicit cognitive chunks (see the toy sketch after this list).
- The type of question significantly impacted response time.
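To make the explicit-vs-implicit distinction concrete, here is a toy illustration of my own (the domain and wording are made up, not taken from the study): an explicit chunk defines the intermediate concept on a separate, named line that the reader must look up, while an implicit chunk inlines the same definition where it is used.

```python
# Hypothetical toy explanations; names and rules are invented for illustration.

# Explicit cognitive chunk: the intermediate concept "comfort_food" is defined
# on its own line and then referenced by name, so the reader must dereference it.
explicit_chunk = [
    "DEFINE comfort_food := warm AND soft AND familiar",
    "IF comfort_food AND evening THEN recommend",
]

# Implicit cognitive chunk: the same concept is inlined where it is used,
# so no lookup step is needed (the reported finding is that this was faster).
implicit_chunk = [
    "IF (warm AND soft AND familiar) AND evening THEN recommend",
]

for name, expl in [("explicit", explicit_chunk), ("implicit", implicit_chunk)]:
    print(name, "->", len(expl), "line(s)")
```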