
Commit 5647d6f

committed
first working draft
1 parent fb8a39d commit 5647d6f

File tree

4 files changed, +3297 −0 lines changed


README.md

+14
@@ -1,2 +1,16 @@

# LiteratureReviewBot

Create a bot that scans PubMed literature and GitHub for publications related to a topic and summarizes them in a table.

## Dependencies

You'll need Ollama installed on your machine. You can download it [here](https://ollama.com/download). Once it is installed, pull the Llama 3.1 model by running:

```bash
ollama pull llama3.1
```

Create a new virtual environment and install the dependencies by running:

```bash
pip install -eU .
```
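The README stops after installation. Assuming the Streamlit entry point is the `app.py` added in this commit, a typical setup-and-launch sequence might look like the following; the virtual-environment commands are a sketch under that assumption, not part of the source:

```shell
# Create and activate a fresh virtual environment (POSIX shells)
python3 -m venv .venv
source .venv/bin/activate

# Editable install of the project and its dependencies, as in the README
pip install -eU .

# Launch the Streamlit UI (assumes Ollama is already running locally
# with the llama3.1 model pulled)
streamlit run app.py
```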

app.py

+53
@@ -0,0 +1,53 @@

```python
#!/usr/bin/env python3
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama.llms import OllamaLLM
import streamlit as st

from literaturereviewbot.prompts.literature_review import (
    generate_prompt as lit_review_prompt,
)

st.title("Literature Review Bot")

# Styling
st.markdown(
    """
    <style>
    .main {
        background-color: #000000;
    }
    </style>
    """,
    unsafe_allow_html=True,
)

# Sidebar for additional options or information
with st.sidebar:
    st.info("This app uses the Llama 3.1 model to answer your questions.")

### Possible prompt for literature review bot once we have the data
# documents = docs_to_be_summarized
# # Construct the conversation


###

template = """{prompt}
Answer:
"""
prompt = ChatPromptTemplate.from_template(template)
model = OllamaLLM(model="llama3.1")
chain = prompt | model

# Main content
col1, col2 = st.columns(2)
with col1:
    question = st.text_input("Enter your question here")
    if question:
        with st.spinner("Thinking..."):
            prompt = lit_review_prompt(question)
            answer = chain.invoke(prompt)
            st.success("Done!")
            st.markdown(f"**Answer:** {answer}")
    else:
        st.warning("Please enter a question to get an answer.")
```
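The `prompt | model` line in app.py builds a two-stage pipeline: the template fills in the `{prompt}` placeholder, and the formatted string is handed to the model. A minimal pure-Python sketch of that composition, using hypothetical stand-in classes rather than the real LangChain `Runnable` machinery:

```python
# Hypothetical stand-ins illustrating how `prompt | model` composes two
# steps; the real LangChain Runnable protocol is much richer than this.

class Pipeline:
    def __init__(self, steps):
        self.steps = steps

    def invoke(self, value):
        # Feed the output of each step into the next, like `prompt | model`.
        for step in self.steps:
            value = step.invoke(value)
        return value


class Template:
    """Fills a {prompt} placeholder, playing the role of ChatPromptTemplate."""

    def __init__(self, template):
        self.template = template

    def invoke(self, text):
        return self.template.format(prompt=text)

    def __or__(self, other):
        # The `|` operator chains this step with the next one.
        return Pipeline([self, other])


class EchoModel:
    """Stand-in for OllamaLLM: just upper-cases its input."""

    def invoke(self, text):
        return text.upper()


chain = Template("{prompt}\nAnswer:\n") | EchoModel()
print(chain.invoke("What is PubMed?"))
```

Here `chain.invoke` first formats the question into the template, then passes the result to the stand-in model, mirroring how `chain.invoke(prompt)` drives the real template-then-LLM pipeline in the app.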
