AI Feature | Who (could be harmed) | How (the harm could happen) | Likelihood | Severity
--- | --- | --- | --- | ---
Email summarizer (H1) | AI company | The AI company’s reputation may be harmed by an inaccurate summary. | 3, 3, 3 | 2, 2, 3
| Author | The AI summary might lose critical context and information, cutting out a crucial part of the email. | 4, 3, 3 | 3, 3, 3
| Owner of the email | The rewrite could leak data from other users’ emails. | 3, 1, 2 | 4, 4, 2
| People whose names / personal information is lost in the summarization | Potential data loss: the email could be very long, and the rewrite could drop context (e.g., a full name), confusing the receivers. | 3, 3, 3 | 2, 3, 3
| Product team, company | Prompt injection attack. | 3, 1, 3 | 2, 1, 3 |
| Receiver | Omission of vital information and context: the feature can miss very important information. | 3, 3, 3 | 3, 2, 3
| Receiver | The rewrite may be overly open-ended, which might cause the receiver anxiety. | 2, 3, 2 | 1, 3, 2
| Recipients | If users can control the LLM’s temperature, they can generate less accurate emails. | 1, 3, 1 | 4, 2, 1
| Recipients, due to loss of productivity and insight into their org’s metrics | The summary may omit metrics that were in the original email. | 3, 3, 3 | 2, 2, 3
| Scam victims | Scam emails are very common; automating mass emailing makes it easy to harm more victims. | 2, 3, 3 | 4, 3, 3
| Sender | In a professional setting, if the feature fails to capture key terms, the summary can sound unprofessional, harming the sender’s reputation. | 2, 3, 3 | 2, 2, 2
| Sender | The summary may be inaccurate (e.g., wrong times or names in the rewrite), harming the sender’s reputation. | 3, 3, 3 | 2, 3, 3
| Sender, recipient | The feature introduces incorrect information that is not in the original email. | 3, 3, 3 | 3, 2, 2
| Sender, recipient | Degraded communication quality: if the rewrite’s tone differs from the original email, it can cause miscommunication, harming both sender and recipient. | 3, 2, 3 | 2, 2, 3
| Sender, recipient | The delicacy of the original email may be lost in the shorter version; the tone, emotion, and human elements may change. | 4, 3, 3 | 3, 2, 3
| Sender, recipient | The summary can have a different voice / style from the email (more direct, less soft), which may be perceived negatively. | 4, 3, 4 | 3, 2, 4
| Sender, recipient, company relationships | The AI can be culturally blind and may not follow typical email norms in the rewrite. | 3, 3, 3 | 2, 2, 3
| Sender, receiver | The AI feature puts words in other people’s mouths; it can express viewpoints that differ from the email, misrepresenting the author’s views. | 2, 3, 4 | 3, 3, 4
| Sender, recipient | The summary can lose the tone and voice of the email, becoming less personal. | 3, 3, 4 | 2, 3, 3
| Stakeholders of the decision | If the communication is about decision making and the rewrite contains errors, it can harm the stakeholders of that decision. | 1, 4, 3 | 1, 4, 3
| Team building the feature | Damage to the team’s reputation and credibility if the feature fails in these ways. | 3, 2, 3 | 3, 1, 2
Toxicity classifier (H2) | Advertisers | If the AI feature fails, ads might appear next to toxic content, losing revenue. | 3, 3, 3 | 1, 2, 3
| Bullies | Teachers use it to identify students being bullied; the AI may bias teachers against the alleged bullies. | 2, 3, 2 | 3, 3, 2
| Company of the AI product | The company may lose users due to increased moderation. | 3, 2, 1 | 4, 2, 1
| Customer | Customer service agents use it to identify abusive customers. Customers may lose the opportunity to get help from customer service agents. | 3, 3, 3 | 3, 2, 4 |
| Customer service agents | If customer service uses it to screen customers, false negatives (toxic customers not flagged) could expose agents to abuse. | 3, 1, 3 | 3, 1, 3
| Employees of the company | HR departments use it to screen job applicants for toxic behavior. Employees may be unfairly rejected for jobs due to AI-generated toxicity labels. | 1, 3, 3 | 1, 3, 4 |
| End user (children & parents) | Toxic content mislabeled as non-toxic (false negatives) will not be blocked by parental controls. | 3, 3, 3 | 3, 4, 4
| Government | Military uses it to identify potential insurgents. Governments may use AI to target and surveil innocent people, leading to human rights violations. | 3, 2, 3 | 4, 3, 3 |
| Hate group targets | Hate groups use it to identify potential recruits. Hate groups may be able to recruit more people due to the AI tool. | 3, 2, 3 | 3, 3, 3 |
| Job applicants | Reddit has a karma score; similarly, if a social media platform uses this feature to prioritize non-toxic content, misclassification could cost users job opportunities (e.g., a tweet never seen by companies). | 3, 4, 1 | 3, 3, 1
| Online moderators | Online moderators may lose their jobs because the AI tool can distinguish toxic from non-toxic content, removing the need for human moderators. | 2, 1, 3 | 3, 1, 4
| Online moderators | Moderators relying on the tool may miss toxic content, which then gets posted to social media. | 3, 3, 3 | 3, 4, 3
| People being targeted by law enforcement | Law enforcement uses it to identify potential terrorists. People misclassified by the AI may be unfairly targeted by law enforcement. | 3, 1, 3 | 3, 1, 4
| People who are not members of hate groups | Hate groups use it to identify potential recruits. People who are not members of hate groups may be affected by increased hate group activity. | 3, 3, 4 | 3, 3, 4 |
| Sender, receivers | False positives: non-toxic content flagged as toxic harms legitimate receivers (the information is filtered out), and the sender cannot post. | 3, 3, 4 | 3, 3, 4
| Social media users | Social media users may be exposed to toxic content that was mislabeled by the AI as non-toxic. | 3, 3, 3 | 3, 4, 3 |
| Social media users | Social media users may feel frustrated or anxious about the accuracy of the AI tool. | 3, 3, 3 | 2, 4, 2 |
| Social media users | Social media users who do not use this AI might be exposed to distorted messages. | 2, 1, 3 | 3, 1, 3
| Social media users | Users may be unfairly flagged as toxic (false positives). | 3, 3, 3 | 3, 2, 2
| Student | Teachers use it to help students identify toxic language. Students may feel frustrated or anxious about the accuracy of the AI tool. | 2, 1, 3 | 2, 1, 4 |
| Victims | Scammers use it to identify potential victims. Victims of scams may lose money or personal information. | 2, 2, 1 | 4, 3, 1 |
Article summarizer (H3) | Anyone reading the document | If the document is sent to anyone else, the summary may not represent it well, and readers of the summary might misunderstand the original document. | 4, 3, 3 | 3, 2, 3
| Business | Businesses use it to write marketing content and memos; a bad summary can lead to miscommunication, causing financial loss. | 4, 3, 3 | 3, 3, 4
| Company publishing the content (newspaper) | If the summary is wrong, people would lose confidence in the company. | 3, 4, 4 | 2, 4, 3
| Readers (economic loss) | The summary may contain misinformation about a critical article, causing economic loss. | 3, 3, 1 | 3, 3, 1
| Employees | Employees can use it to summarize workplace documents; a wrong summary can harm the stakeholders. | 3, 3, 3 | 4, 3, 3
| End user | The summary may change the meaning of the article. | 3, 4, 4 | 3, 4, 4
| Journalists / writers | If the summary is wrong, journalists’ reputation might be harmed. | 1, 4, 3 | 1, 3, 4 |
| Kids | Kids use it to cheat on school assignments. | 3, 2, 3 | 2, 2, 3
| Readers | If the AI fails to pick up key information (nuances) in the article, readers miss its points, causing miscommunication. | 3, 3, 3 | 2, 2, 2
| Readers, content providers, news outlets | If the article contains two facts that are true independently, the summary may combine them into a statement that is no longer true. | 3, 2, 3 | 2, 2, 3
| Readers, society | A lot of content is missed, making readers less educated over time and causing societal loss (some events being lost to history). | 3, 2, 1 | 3, 3, 1
| Readers, writers | The summary would lose a lot of context and detail. | 3, 3, 3 | 2, 3, 2
| Social groups | Representational harm. It could stereotype some social groups in the output. | 3, 3, 3 | 3, 3, 3 |
| Students | Students can use this feature to summarize articles instead of reading for assignments. | 4, 3, 3 | 3, 3, 3 |
| Students | Students can use this tool to cheat and lose opportunities to learn. | 4, 1, 3 | 3, 1, 4 |
| Students | Students misuse this feature to cheat on school assignments. They might lose learning opportunities. | 4, 2, 3 | 3, 2, 3 |
| Users | The context can be manipulated. | 2, 4, 3 | 2, 4, 3 |
| Users | The summary can lose some information and context from the article. | 3, 4, 3 | 3, 4, 2 |
| Users | The tool makes mistakes (e.g., wrong summary). It can harm the users of the tool by missing information in the article. | 3, 3, 3 | 3, 3, 3 |
| Users | Users may lose trust if the summary is not right, and would become more stressed as well. | 3, 3, 4 | 2, 2, 3
| Writers | If someone uses the tool to write, it may change people’s perception of how they write. | 3, 1, 1 | 2, 1, 2
Math tutor (H4) | Company of the AI product | Engineers use it to explain to non-technical stakeholders. The company may face increased legal liability due to AI-generated explanations being inaccurate or misleading. | 2, 1, 2 | 1, 1, 2 |
| Editors | Journal readers use it to get explanations of equations in inferential statistics sections. Journal editors may worry that the quality of their journal declines because the AI feature makes too many errors. Readers get frustrated over time. | 2, 1, 2 | 2, 1, 3 |
| Marginalized population | People with fewer resources are less likely to access this tool, losing the opportunity to learn. | 4, 3, 3 | 3, 3, 3
| Minority social groups | If asked about topics involving certain social groups, the math tool can refuse to answer, ignoring important questions and marginalizing those groups. | 3, 3, 1 | 3, 3, 1
| Minority social groups | Different social groups phrase things in different ways. If a user asks a question in non-professional English or in a language other than English, the tool’s response can cause alienation. | 4, 3, 3 | 3, 3, 3
| Non-technical stakeholder | Engineers use it to explain complex math concepts to non-technical stakeholders. Non-technical stakeholders may be misled by AI-generated explanations of complex mathematical models. | 3, 2, 3 | 3, 2, 3 |
| Parents of students | Students use it to learn math. Parents may feel frustrated or anxious about their child’s education. | 2, 2, 3 | 2, 2, 3 |
| Public | The public may be misled by AI-generated explanations of scientific concepts. | 3, 3, 3 | 3, 3, 3 |
| Rocket engineers | Wrong answers can harm the engineering task at hand. | 3, 3, 3 | 3, 2, 3
| Student | Tutors use it to help students understand math concepts. Students may lose the opportunity to learn the material in a way that is tailored to their individual needs. | 4, 2, 4 | 4, 2, 4 |
| Students | If students use this tool to cheat, they could suffer economic loss later (performing poorly in their jobs). | 2, 1, 3 | 2, 1, 3
| Students | Incorrect math answers harm the user’s learning process. | 3, 3, 3 | 2, 3, 3 |
| Students | Students use it to cheat, losing opportunities to learn. | 4, 1, 3 | 3, 1, 3 |
| Students | Students use it to learn math. Students may feel like they are losing their ability to learn about math problems. | 2, 1, 2 | 2, 1, 2 |
| Students | Students use it to learn math. Students who do not use this AI product may feel like they are not getting the same quality of education as their peers. | 3, 3, 3 | 2, 3, 3 |
| Students | Wrong or less optimized solutions and incorrect information cause students to learn the wrong thing. | 3, 3, 3 | 3, 2, 3
| Students and teachers | Students can use the app to cheat and therefore do not learn. | 4, 3, 3 | 3, 2, 4
| Students who do not use this AI product | Students use it to learn math. Students who do not use this AI product may feel left behind by their peers who do. | 3, 3, 4 | 3, 2, 4 |
| Students, society | If answers depend on the cultural context of a word problem, they can harm minority students. | 2, 3, 1 | 3, 3, 1
| Users | If asked about tax rates, vote counts, interest rates, or wages, a wrong answer can cause political and social harm. | 3, 2, 3 | 3, 2, 3
| Users | Not all questions are about numbers; sometimes the app might refuse to answer hard math questions, and users would feel upset. | 3, 3, 3 | 2, 2, 3