
Conversation

@puja-trivedi
Contributor

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @puja-trivedi, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a comprehensive design document detailing the proposed integration of the Claim LLM (CLLM) tool. The primary objective is to establish a robust pipeline for extracting and evaluating scientific claims from research papers using large language models, and subsequently ingesting this structured data into a knowledge graph. This foundational work aims to enhance the system's capability for automated scientific claim verification and analysis, paving the way for advanced features and broader application within the StructSense framework.

Highlights

  • New Design Document: A new design document, cllm_integration.md, has been added to outline the integration of the Claim LLM (CLLM) tool.
  • CLLM Overview and Workflow: The document provides an overview of CLLM, a command-line tool for extracting, evaluating, and comparing scientific claims using Large Language Models, detailing its 4-stage workflow.
  • Minimum Viable Product (MVP): An MVP is defined for ingesting assertions and evidence from scientific papers into a knowledge graph using CLLM, including steps for input, processing, output formatting, and ingestion (a minimal pipeline sketch follows after this list).
  • Architectural Diagram: A Mermaid flowchart visually represents the proposed CLLM CLI integration architecture, illustrating the data flow from input to knowledge graph ingestion.
  • Future Enhancements: The document outlines future plans, such as integrating peer review evaluations, developing a user interface, and incorporating CLLM into the StructSense Agentic Framework.
  • Example Outputs: Example CLLM output metrics, including extraction and evaluation details, costs, and processing times for several bioRxiv papers, are provided.
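
To make the proposed data flow concrete, here is a minimal Python sketch of the MVP pipeline: run CLLM on a paper's text, filter its output, and hand the structured claims to a knowledge-graph loader. The `cllm extract` invocation, its flags, the `claims.json` filename, and the JSON fields are assumptions for illustration only, not the actual CLLM interface or output schema.

```python
# Hypothetical sketch of the MVP pipeline described above: run CLLM on
# extracted paper text, filter its output, and pass structured records to a
# knowledge-graph ingestion step. The CLI subcommand, flags, file names, and
# JSON fields are assumptions for illustration, not the real CLLM interface.
import json
import subprocess
from pathlib import Path


def run_cllm(paper_text: Path, out_dir: Path) -> Path:
    """Invoke the CLLM CLI on a plain-text paper (hypothetical flags)."""
    subprocess.run(
        ["cllm", "extract", "--input", str(paper_text), "--output-dir", str(out_dir)],
        check=True,
    )
    return out_dir / "claims.json"  # assumed output location


def filter_for_kg(claims_path: Path) -> list[dict]:
    """Keep only the assertion/evidence fields needed for ingestion (assumed schema)."""
    records = json.loads(claims_path.read_text())
    return [
        {
            "assertion": r["claim"],
            "evidence": r.get("evidence", []),
            "paper_id": r.get("paper_id"),
        }
        for r in records
    ]


def ingest(records: list[dict]) -> None:
    """Placeholder for the knowledge-graph ingestion step."""
    for rec in records:
        print("would ingest:", rec["assertion"][:80])


if __name__ == "__main__":
    claims_file = run_cllm(Path("paper.txt"), Path("cllm_out"))
    ingest(filter_for_kg(claims_file))
```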


Changelog
  • doc/design/cllm_integration.md
    • Added a new design document outlining the integration proposal for the Claim LLM (CLLM) tool, including its overview, MVP, future enhancements, architectural diagram, and example outputs.
Activity
  • The pull request was created by puja-trivedi.
  • No further human activity (comments, reviews, or updates) has been recorded since its creation.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
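
For example, commenting `/gemini review` on this pull request triggers a fresh review of its current state, while `/gemini summary` regenerates a summary like the one above.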

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

Contributor

@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a design document for integrating the CLLM tool. The document is well-structured and clearly outlines the overview, MVP, future enhancements, and architecture. My review includes a couple of minor formatting suggestions to improve the readability and correctness of the markdown file, particularly in the output results table.

puja-trivedi and others added 2 commits February 6, 2026 17:07
Removed trailing space in the 'MVP' heading.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Corrected the table formatting for consistency and proper rendering.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@puja-trivedi requested a review from nadernik on February 9, 2026 at 18:57
Updated Google Drive links for output
3. **Peer Review Evaluation**: Group claims based on peer review commentary
4. **Compare Results**: Compare LLM and peer review evaluations

## MVP
Collaborator


@puja-trivedi How does this align with StructSense? The design document doesn't make clear how it will be integrated with StructSense. Also, what is it that CLLM does differently? Is it prompt-based? If so, can't we just incorporate those prompts? If not, can we use CLLM as a tool in StructSense, or its core components as tools?

Please remember, we already have an extraction pipeline in StructSense.

4. **Knowledge Graph Ingestion**: Ingest the processed outputs into the knowledge graph for downstream applications.
### Todo:
1. Add PDF Parsing Layer to extract text from PDFs and feed it into the CLLM workflow (see the sketch below).
2. Implement output processing to filter and format CLLM outputs for knowledge graph ingestion.
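
A minimal sketch of the first Todo item, assuming pypdf as the text extractor and a simple one-text-file-per-paper handoff to the CLLM step; the actual parser (e.g., GROBID) and directory layout are open design decisions.

```python
# Minimal sketch of Todo item 1: a PDF parsing layer that extracts plain text
# and writes it where the CLLM step can pick it up. pypdf and the
# one-text-file-per-paper layout are assumptions, not a decided design.
from pathlib import Path

from pypdf import PdfReader


def pdf_to_text(pdf_path: Path) -> str:
    """Concatenate the extractable text of every page in the PDF."""
    reader = PdfReader(str(pdf_path))
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def prepare_cllm_input(pdf_path: Path, work_dir: Path) -> Path:
    """Write extracted text to <work_dir>/<paper>.txt for the CLLM workflow."""
    work_dir.mkdir(parents=True, exist_ok=True)
    txt_path = work_dir / (pdf_path.stem + ".txt")
    txt_path.write_text(pdf_to_text(pdf_path), encoding="utf-8")
    return txt_path
```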
Collaborator


@puja-trivedi It looks like you're approaching this as an independent application. Is that so?


## Future Enhancements
- **Peer Review Integration**: Incorporate the peer review evaluation by working with domain-experts to annotate claims and evidence. This will allow us to compare LLM evaluations with human expert assessments.
- **UI Development**: Develop a user-friendly interface for interacting with the CLLM tool. This could include features for uploading papers, viewing extracted claims, evaluating claims, and visualizing the comparison between LLM and peer review evaluations.
Collaborator


The UI development should be part of the BrainKB UI, since that is what integrates the different applications. I believe this design document should focus only on how CLLM will be integrated with StructSense.

Collaborator

@tekrajchhetri left a comment


Please check the comments below.

