
Schedule

9:00-9:45 Opening Remarks & Shared Task Overview
FEVER Organizers
9:45-10:30 DSPy: A Framework for Programming - Not Prompting - Foundation Models.
Krista Opsahl-Ong, Stanford University
10:30-11:00 Coffee break
11:00-12:00 Poster Session
Multi-hop Evidence Pursuit Meets the Web: Team Papelo at FEVER 2024
Christopher Malon
Retrieving Semantics for Fact-Checking: A Comparative Approach using CQ (Claim to Question) & AQ (Answer to Question)
Nicolò Urbani, Sandip Modha and Gabriella Pasi
RAG-Fusion Based Information Retrieval for Fact-Checking
Yuki Momii, Tetsuya Takiguchi and Yasuo Ariki
UHH at AVeriTeC: RAG for Fact-Checking with Real-World Claims
Özge Sevgili, Irina Nikishina, Seid Muhie Yimam, Martin Semmann and Chris Biemann
Improving Evidence Retrieval on Claim Verification Pipeline through Question Enrichment
Svetlana Churina, Anab Maulana Barik and Saisamarth Rajesh Phaye
Dunamu-ml’s Submissions on AVERITEC Shared Task
Heesoo Park, Dongjun Lee, Jaehyuk Kim, ChoongWon Park and Changhwa Park
FZI-WIM at AVeriTeC Shared Task: Real-World Fact-Checking with Question Answering
Jin Liu, Steffen Thoma and Achim Rettinger
Zero-Shot Learning and Key Points Are All You Need for Automated Fact-Checking
Mohammad Ghiasvand Mohammadkhani, Ali Ghiasvand Mohammadkhani and Hamid Beigy
Evidence-backed Fact Checking using RAG and Few-Shot In-Context Learning with LLMs
Ronit Singal, Pransh Patwa, Parth Patwa, Aman Chadha and Amitava Das
SK_DU Team: Cross-Encoder based Evidence Retrieval and Question Generation with Improved Prompt for the AVeriTeC Shared Task
Shrikant Malviya and Stamos Katsigiannis
InFact: A Strong Baseline for Automated Fact-Checking
Mark Rothermel, Tobias Braun, Marcus Rohrbach and Anna Rohrbach
Exploring Retrieval Augmented Generation For Real-world Claim Verification
Adjali Omar
GProofT: A Multi-dimension Multi-round Fact Checking Framework Based on Claim Fact Extraction
Jiayu Liu, Junhao Tang, Hanwen Wang, Baixuan Xu, Haochen Shi, Weiqi Wang and Yangqiu Song
HerO at AVeriTeC: The Herd of Open Large Language Models for Verifying Real-World Claims
Yejun Yoon, Jaeyoon Jung, Seunghyun Yoon and Kunwoo Park
AIC CTU system at AVeriTeC: Re-framing automated fact-checking as a simple RAG task
Herbert Ullrich, Tomáš Mlynář and Jan Drchal
Enhancing Fact Verification with Causal Knowledge Graphs and Transformer-Based Retrieval for Deductive Reasoning
Fiona Anting Tan, Jay Desai and Srinivasan H. Sengamedu
Numerical Claim Detection in Finance: A New Financial Dataset, Weak-Supervision Model, and Market Analysis
Agam Shah, Arnav Hiray, Pratvi Shah, Arkaprabha Banerjee, Anushka Singh, Dheeraj Deepak Eidnani, Sahasra Chava, Bhaskar Chaudhury and Sudheer Chava
Streamlining Conformal Information Retrieval via Score Refinement
Yotam Intrator, Regev Cohen, Ori Kelner, Roman Goldenberg, Ehud Rivlin and Daniel Freedman
Improving Explainable Fact-Checking via Sentence-Level Factual Reasoning
Francielle Vargas, Isadora Salles, Diego Alves, Ameeta Agrawal, Thiago A. S. Pardo and Fabrício Benevenuto
Fast Evidence Extraction for Grounded Language Model Outputs
Pranav Mani, Davis Liang and Zachary Chase Lipton
Question-Based Retrieval using Atomic Units for Enterprise RAG
Vatsal Raina and Mark Gales
AMREx: AMR for Explainable Fact Verification
Chathuri Jayaweera, Sangpil Youm and Bonnie J Dorr
Claim Check-Worthiness Detection: How Well do LLMs Grasp Annotation Guidelines?
Laura Majer and Jan Šnajder
Contrastive Learning to Improve Retrieval for Real-World Fact Checking
Aniruddh Sriram, Fangyuan Xu, Eunsol Choi and Greg Durrett
RAGAR, Your Falsehood Radar: RAG-Augmented Reasoning for Political Fact-Checking using Multimodal Large Language Models
Mohammed Abdul Khaliq, Paul Yu-Chun Chang, Mingyang Ma, Bernhard Pflugfelder and Filip Miletić
FactGenius: Combining Zero-Shot Prompting and Fuzzy Relation Mining to Improve Fact Verification with Knowledge Graphs
Sushant Gautam and Roxana Pop
Fact or Fiction? Improving Fact Verification with Knowledge Graphs through Simplified Subgraph Retrievals
Tobias Aanderaa Opsahl
ProTrix: Building Models for Planning and Reasoning over Tables with Sentence Context
Zirui Wu and Yansong Feng
SparseCL: Sparse Contrastive Learning for Contradiction Retrieval
Haike Xu, Zongyu Lin, Yizhou Sun, Kai-Wei Chang and Piotr Indyk
Learning to Verify Summary Facts with Fine-Grained LLM Feedback
Jihwan Oh, Jeonghwan Choi, Nicole Hee-Yeon Kim, Taewon Yun, Ryan Donghan Kwon and Hwanjun Song
DAHL: Domain-specific Automated Hallucination Evaluation of Long-Form Text through a Benchmark Dataset in Biomedicine
Jean Seo, Jongwon Lim, Dongjun Jang and Hyopil Shin
Detecting Misleading News Representations on Social Media Posts
Satoshi Tohda, Naoki Yoshinaga, Masashi Toyoda, Sho Cho and Ryota Kitabayashi
Evidence Retrieval for Fact Verification using Multi-stage Reranking
Shrikant Malviya and Stamos Katsigiannis
Generating Media Background Checks for Automated Source Critical Reasoning
Michael Schlichtkrull
DYNAMICQA: Tracing Internal Knowledge Conflicts in Language Models
Sara Vera Marjanović, Haeun Yu, Pepa Atanasova, Maria Maistro, Christina Lioma and Isabelle Augenstein
Zero-Shot Fact Verification via Natural Logic and Large Language Models
Marek Strong, Rami Aly and Andreas Vlachos
Do We Need Language-Specific Fact-Checking Models? The Case of Chinese
Caiqi Zhang, Zhijiang Guo and Andreas Vlachos
12:00-12:35 Contributed Shared Task Talks
InFact: A Strong Baseline for Automated Fact-Checking
Mark Rothermel, Tobias Braun, Marcus Rohrbach and Anna Rohrbach
HerO at AVeriTeC: The Herd of Open Large Language Models for Verifying Real-World Claims
Yejun Yoon, Jaeyoon Jung, Seunghyun Yoon and Kunwoo Park
AIC CTU system at AVeriTeC: Re-framing automated fact-checking as a simple RAG task
Herbert Ullrich, Tomáš Mlynář and Jan Drchal
Dunamu-ml’s Submissions on AVERITEC Shared Task
Heesoo Park, Dongjun Lee, Jaehyuk Kim, ChoongWon Park and Changhwa Park
Multi-hop Evidence Pursuit Meets the Web: Team Papelo at FEVER 2024
Christopher Malon
12:35-14:00 Lunch Break
14:00-14:45 Truth, Falsehood, AI, and the World: Multilingual Insights into the Production and Perception of False Information
Rada Mihalcea, University of Michigan
14:45-15:30 Strategies for accessing data for fact-checking in Africa; using fact-checks to forecast potential consequences of misinformation
Peter Cunliffe-Jones, University of Westminster
15:30-16:00 Coffee Break
16:00-16:30 Contributed Shared Task Talks
Enhancing Fact Verification with Causal Knowledge Graphs and Transformer-Based Retrieval for Deductive Reasoning
Fiona Anting Tan, Jay Desai and Srinivasan H. Sengamedu
Contrastive Learning to Improve Retrieval for Real-World Fact Checking
Aniruddh Sriram, Fangyuan Xu, Eunsol Choi and Greg Durrett
FactGenius: Combining Zero-Shot Prompting and Fuzzy Relation Mining to Improve Fact Verification with Knowledge Graphs
Sushant Gautam and Roxana Pop
16:30-17:15 Visual Fact Checking, ClaimReview, & Potential Future Directions
Chris Bregler, Google DeepMind
17:15-17:30 Closing Remarks
FEVER Organizers

Invited Talks

DSPy: A Framework for Programming - Not Prompting - Foundation Models.
Krista Opsahl-Ong

Language Models (LMs), trained as generalist systems, have made it much easier to prototype impressive AI demos, but turning LMs into reliable AI systems remains challenging, as their monolithic nature makes them hard to control, debug, and improve. To tackle this, the AI community is increasingly building Compound AI Systems, i.e. modular programs that use LMs as specialized components, but most such systems are highly brittle in practice: they couple task decomposition with choices about prompting, finetuning, inference-time strategies, and even individual LMs. Instead of prompting LMs with string templates, what if we could build more reliable and scalable AI systems by programmatically composing natural-language-typed modules that can learn from data? This is the key idea behind the DSPy framework, which introduces natural-language programming abstractions (DSPy Signatures and Modules) and a set of new ML algorithms (DSPy Optimizers) that compile high-level programs down into optimized prompting and finetuning strategies for Compound AI Systems. In this talk, we will provide an overview of what DSPy is and how the DSPy programming model works, as well as a deeper dive into MIPROv2, the latest automatic prompt optimization algorithm that powers DSPy.
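To make the programming model concrete, here is a minimal sketch (assuming a recent DSPy release) of a natural-language-typed module for claim verification; the FactVerification signature, its field names, and the model identifier are illustrative and are not taken from the talk.

```python
import dspy

# Configure an LM backend; the model identifier here is only an example.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# A DSPy Signature declares a natural-language-typed interface:
# inputs and outputs are described declaratively rather than as prompt strings.
class FactVerification(dspy.Signature):
    """Decide whether the evidence supports or refutes the claim."""
    claim = dspy.InputField(desc="the claim to verify")
    evidence = dspy.InputField(desc="retrieved evidence passages")
    verdict = dspy.OutputField(desc="Supported, Refuted, or Not Enough Evidence")

# A DSPy Module binds an inference strategy (here, chain of thought) to the
# signature, keeping task decomposition separate from prompting choices.
verify = dspy.ChainOfThought(FactVerification)

result = verify(claim="The Eiffel Tower is in Berlin.",
                evidence="The Eiffel Tower is a landmark in Paris, France.")
print(result.verdict)

# A DSPy Optimizer such as MIPROv2 can then compile this program against a
# metric and a small training set, tuning instructions and demonstrations
# automatically instead of relying on hand-written prompt strings.
```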



Truth, Falsehood, AI, and the World: Multilingual Insights into the Production and Perception of False Information
Rada Mihalcea

False information is shared every day, everywhere, and in countless forms. Traditionally, such information was produced and disseminated by people, with various intentions ranging from benign exaggerations to deliberate manipulation. Over the past two decades, research in AI has shown that algorithms can often outperform humans in identifying falsehood, providing a valuable tool in areas such as security, media, and consumer protection. However, the recent rise of large language models adds complexity to this landscape: these models have now become remarkably proficient at generating false information, often convincingly so. The challenge is further amplified by the large number of languages and cultural nuances worldwide, making it increasingly difficult to detect and manage false content. So, where do we go from here?



Strategies for accessing data for fact-checking in Africa; using fact-checks to forecast potential consequences of misinformation
Peter Cunliffe-Jones

Fact-checkers, researchers and policymakers face challenges accessing accurate information online on key topics in many African contexts. To overcome these challenges, fact-checkers develop short-term tactics and see a need for longer-term strategies. Using a model developed from examination of the fact-checks thus produced, I argue that it is possible to better focus AI and research efforts on the potential of specific false claims to cause, or contribute to, specific substantive negative consequences, or harms, for individuals and society, and I discuss the implications.



Visual Fact Checking, ClaimReview, & Potential Future Directions
Chris Bregler

In this talk, I will provide an overview of trends we've observed in online misinformation by analyzing years of online fact-checking data from the International Fact-Checking Network's (IFCN) ClaimReview datasets. The focus will be on recent trends in visual misinformation, covering new genAI-based misinformation, "cheap fakes," and misleading context manipulations. I'll also share some surprising statistics that challenge common beliefs about the most prevalent types of misinformation. Beyond identifying trends, I'll discuss mitigation strategies, including how we can improve information literacy tools, the opportunities and limitations of using AI to detect manipulated content, how various provenance methods together with AI can help mitigate out-of-context manipulations, and new opportunities for multimodal LLMs in the space of visual fact-checking.



Workshop Organising Committee

Mubashara Akhtar

King's College London

Rami Aly

University of Cambridge

Rui Cao

University of Cambridge

Yulong Chen

University of Cambridge

Christos Christodoulopoulos

Amazon

Oana Cocarascu

King's College London

Zhenyun Deng

University of Cambridge

Zhijiang Guo

Huawei

Arpit Mittal

Meta

Michael Schlichtkrull

Queen Mary University of London

James Thorne

KAIST AI

Chenxi Whitehouse

Meta

Andreas Vlachos

University of Cambridge