

- To assess trust more accurately, the researchers used behavioral trust (WoA), a measure based on the difference between the user's predictions and the AI's recommendations and one that is independent of the model's accuracy. By comparing WoA across conditions, the researchers can analyze the relationship between trust and performance (see the WoA sketch after this list).
- Q1: How does feedback affect users' trust in AI?
- A1: According to the research, feedback (e.g., result output) is a key factor influencing user trust; it is the most significant and reliable way to increase users' trust in AI behavior.
- Q2: Does explainability necessarily enhance users' trust in AI?
- Q3: How do result feedback and model interpretability affect user task performance?
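In the advice-taking literature, WoA usually denotes Weight of Advice: the fraction of the gap between a user's initial prediction and the AI's recommendation that the user closes in their final answer. The sketch below is a minimal illustration assuming that standard definition (the study's exact formulation may differ); the function name `weight_of_advice` and the sample numbers are hypothetical, not taken from the study.

```python
from typing import Optional

def weight_of_advice(initial: float, advice: float, final: float) -> Optional[float]:
    """Behavioral trust as Weight of Advice (WoA), assuming the standard
    judge-advisor-system definition:

        WoA = (final - initial) / (advice - initial)

    0 means the user ignored the AI's recommendation entirely;
    1 means the user adopted it fully.
    """
    if advice == initial:
        # The shift toward the advice is undefined when the AI's
        # recommendation coincides with the user's initial prediction.
        return None
    return (final - initial) / (advice - initial)

# Hypothetical example: the user first predicts 50, the AI recommends 80,
# and the user revises to 71 -> WoA = 21 / 30 = 0.7
print(weight_of_advice(initial=50, advice=80, final=71))  # 0.7
```

Because WoA only measures how far the user moves toward the advice, it says nothing about whether the recommendation was correct, which is why the measure is independent of the model's accuracy.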