Creating a Resume Analyzer App with SambaNova Cloud

Overview

The Resume Analyzer is a Streamlit-based web app powered by the Meta-Llama-3.3-70B-Instruct model via SambaNova Cloud. It provides AI-driven insights to help users enhance and optimize their resumes.

Features

  • Resume Upload
    Supports .pdf and .docx file formats.
  • AI-Powered Analysis
    Generates detailed insights, including:
    • Content quality evaluation
    • Keyword optimization
    • Structural feedback
    • ATS (Applicant Tracking System) compatibility analysis
    • Skills gap detection
    • Actionable suggestions for improvement
    • Before/after content suggestions
    • Industry-specific advice
  • Visualizations
    • Keyword density charts
    • Skills gap graphs
  • Export Options
    • Export reports as Markdown (.md) or Text (.txt) files for sharing or documentation.
  • Custom Instructions
    • Tailor analysis based on user-defined goals.
  • Inline Resume Text Editing
  • Sample Demo
    • For rapid testing and onboarding.
  • Help Tab
    • Guides and troubleshooting resources.
  • Dark Mode UI
    • Eye-comfortable user interface.
  • Progress Indicators and Error Handling
    • Displays progress and manages failures gracefully.
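The keyword density visualization rests on a simple frequency computation. The sketch below is a hypothetical illustration of that idea in plain Python, not the app's actual charting code:

```python
import re
from collections import Counter

def keyword_density(text, top_n=5):
    # Lowercase word tokens; density = count / total number of words
    words = re.findall(r"[a-z]+", text.lower())
    total = len(words)
    return [(word, count / total) for word, count in Counter(words).most_common(top_n)]

sample = "Python developer with Python and cloud experience across cloud platforms"
print(keyword_density(sample, top_n=2))  # [('python', 0.2), ('cloud', 0.2)]
```

The resulting (word, density) pairs could then be fed to any charting library for a bar chart.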

Usage Steps

  1. Upload Resume
    Upload a .pdf or .docx resume file.
  2. Optional Job Description
    Paste or upload a job description to align the analysis with target roles.
  3. Review Details
    View word count and estimated reading time.
  4. Edit Resume
    Make inline text edits directly within the app.
  5. Generate Analysis
    Click Generate Analysis to receive insights.
  6. View and Export Report
    View the AI-generated report, copy its content, or download it in Markdown (.md) or text (.txt) format.
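The word count and reading-time figures from step 3 come down to a simple calculation. This sketch mirrors the helpers in streamlit_app.py, which assume roughly 225 words per minute:

```python
def count_words(text):
    # Whitespace-delimited token count
    return len(text.split())

def estimate_reading_time(text):
    # ~225 words per minute, with a one-minute floor
    return max(1, round(count_words(text) / 225))

print(estimate_reading_time("word " * 450))  # 450 words -> 2 minutes
```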

Additional Information

  • File Formats: .pdf and .docx are supported
  • Custom Instructions: Configurable from the sidebar
  • Sample Demo: Use built-in example resume
  • UI Theme: Dark mode included

Troubleshooting

  • Analysis Generation Fails
    Possible cause: the resume may be too long, or there are internet issues.
    Suggested solutions: shorten the resume, check your internet connection, and retry after some time.
  • File Upload Problems
    Possible cause: unsupported format, encoding issues, or a corrupt file.
    Suggested solutions: ensure the file is in .pdf or .docx format, save it with UTF-8 encoding, and verify it is not corrupted.
  • Download Issues
    Possible cause: browser-related restrictions.
    Suggested solutions: try a different browser, or check your browser settings to allow downloads.
  • API Errors
    Possible cause: an incorrect API key, an unstable connection, or rate limits reached.
    Suggested solutions: verify the SambaNova API key in streamlit_app.py, ensure a stable internet connection, and wait before retrying if rate limits apply.
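For the rate-limit and transient-connection cases above, a retry with exponential backoff is a common remedy. This is a generic sketch under that assumption; the shipped app simply surfaces the error and asks the user to retry:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.1):
    # Retry a callable on failure, doubling the delay after each attempt.
    for attempt in range(attempts):
        try:
            return fn()
        except RuntimeError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky_request():
    # Stand-in for an API call that hits a rate limit twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limit exceeded")
    return "report text"

print(with_retries(flaky_request))  # succeeds on the third attempt
```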

Configuration

  • API Key Setup:
    Replace the SAMBANOVA_API_KEY in streamlit_app.py with your key.
  • Model Selection:
    Change MODEL_NAME in streamlit_app.py to update the model.
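Instead of hardcoding the key in streamlit_app.py, it could be read from an environment variable. This is an optional pattern, not what the shipped code does:

```python
import os

def get_api_key():
    # Prefer an environment variable over a hardcoded constant.
    key = os.environ.get("SAMBANOVA_API_KEY", "")
    if not key:
        raise RuntimeError("SAMBANOVA_API_KEY is not set")
    return key

os.environ["SAMBANOVA_API_KEY"] = "demo-key"  # for illustration only
print(get_api_key())  # demo-key
```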

Installation Instructions

Install dependencies:

pip install -r requirements.txt

Run the app:

streamlit run streamlit_app.py

Sample Resume for Testing & Sample Output

John Doe  
123 Main St, Anytown, USA  
john.doe@example.com | (555) 123-4567 | linkedin.com/in/johndoe

Professional Summary  
Experienced software engineer with a strong background in developing scalable web applications and working with AI technologies.

Skills  
- Python, JavaScript, React, Streamlit  
- Machine Learning, NLP, Data Analysis  
- Cloud Computing, Docker, Kubernetes

Experience  
Software Engineer at TechCorp (2018 - Present)  
- Developed and maintained web applications using React and Python.  
- Led migration to cloud infrastructure, improving scalability by 30%.  
- Collaborated with cross-functional teams to deliver AI-powered features.

Education  
Bachelor of Science in Computer Science, University of Somewhere (2014 - 2018)

Certifications  
- AWS Certified Solutions Architect  
- Certified Scrum Master



Sample Output

Resume Analysis Report

Content Quality Assessment

The resume provides a clear and concise overview of John Doe's experience, skills, and education. However, there are areas that can be improved to make it more effective:

- The professional summary is brief and lacks specific accomplishments or career goals.
- The skills section is well-organized, but some skills could be grouped or prioritized for better visibility.
- The experience section provides some quantitative results, but more detail about the projects and technologies used would be beneficial.
- The education and certifications sections are well-structured, but could be expanded to include relevant coursework or training.

Keyword Optimization Suggestions

To improve the resume's visibility in ATS and keyword searches, consider adding the following keywords:

- Cloud-native technologies
- Agile development methodologies
- Containerization and orchestration
- Natural Language Processing (NLP) techniques
- Data visualization tools
- Collaborative development platforms

Structure Evaluation

The resume follows a standard reverse-chronological structure, which is clear and easy to follow. However, some sections could be rearranged or expanded for better emphasis:

- Consider moving the certifications section above the education section to highlight relevant industry certifications.
- The skills section could be divided into technical and soft skills for better organization.
- The experience section could be expanded to include more details about projects, technologies, and accomplishments.

ATS Compatibility Analysis

The resume is well-formatted and should be compatible with most ATS systems. However, to ensure optimal compatibility:

- Use a standard font, such as Arial, Calibri, or Helvetica.
- Avoid tables, images, or graphics that may not be parsed correctly by ATS systems.
- Use keywords and phrases from the job posting to increase the resume's relevance score.

Skills Gap Analysis

Based on the provided skills and experience, some potential skills gaps or areas for improvement include:

- Data engineering and architecture
- Cybersecurity and compliance
- DevOps and continuous integration
- Leadership and management skills
- Emerging technologies, such as blockchain or quantum computing

Improvement Suggestions

To improve the resume, consider the following actionable recommendations:

- Use a stronger professional summary: instead of a brief statement, use a paragraph to highlight career goals, accomplishments, and relevant experience.
- Prioritize skills: group skills into categories (e.g., programming languages, frameworks, cloud technologies) and prioritize the most relevant ones.
- Add more detail to the experience section: include specific projects, technologies, and accomplishments to demonstrate expertise and impact.
- Emphasize achievements: instead of just listing job responsibilities, focus on the results achieved in each role.

Before and After Examples

Before:

Professional Summary: Experienced software engineer with a strong background in developing scalable web applications and working with AI technologies.

After:

Professional Summary: Results-driven software engineer with 5+ years of experience in developing scalable web applications and leveraging AI technologies to drive business growth. Proven track record of delivering high-impact projects and leading cross-functional teams to achieve exceptional results.

Industry-Specific Tips

For a software engineer in the tech industry:

- Highlight cloud-native expertise: emphasize experience with cloud providers, such as AWS, Azure, or Google Cloud, and cloud-native technologies, such as Kubernetes or Docker.
- Emphasize AI and machine learning skills: highlight experience with AI and machine learning frameworks, such as TensorFlow or PyTorch, and with NLP techniques.
- Showcase collaborative skills: highlight experience working with cross-functional teams, agile development methodologies, and collaborative development platforms, such as GitHub or GitLab.

Code Reference

  • Main File: streamlit_app.py
  • Dependencies: requirements.txt

requirements.txt

altair==5.5.0
annotated-types==0.7.0
anyio==4.9.0
attrs==25.3.0
bcrypt==4.3.0
beautifulsoup4==4.13.4
blinker==1.9.0
bs4==0.0.2
cachetools==5.5.2
captcha==0.7.1
certifi==2025.4.26
cffi==1.17.1
charset-normalizer==3.4.2
click==8.2.1
cryptography==45.0.3
distro==1.9.0
extra-streamlit-components==0.1.80
fpdf==1.7.2
gitdb==4.0.12
GitPython==3.1.44
h11==0.16.0
httpcore==1.0.9
httpx==0.28.1
idna==3.10
Jinja2==3.1.6
jiter==0.10.0
jsonschema==4.23.0
jsonschema-specifications==2025.4.1
lxml==5.4.0
MarkupSafe==3.0.2
narwhals==1.41.0
numpy==2.2.6
openai==1.82.0
packaging==24.2
pandas==2.2.3
pillow==11.2.1
plotly==6.1.2
protobuf==6.31.0
pyarrow==20.0.0
pycparser==2.22
pydantic==2.11.5
pydantic_core==2.33.2
pydeck==0.9.1
PyJWT==2.10.1
PyPDF2==3.0.1
python-dateutil==2.9.0.post0
python-docx==1.1.2
pytz==2025.2
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.3
rpds-py==0.25.1
six==1.17.0
smmap==5.0.2
sniffio==1.3.1
soupsieve==2.7
streamlit==1.45.1
streamlit-authenticator==0.4.2
striprtf==0.0.29
tenacity==9.1.2
toml==0.10.2
tornado==6.5.1
tqdm==4.67.1
typing-inspection==0.4.1
typing_extensions==4.13.2
tzdata==2025.2
urllib3==2.4.0

streamlit_app.py

import streamlit as st
import docx
import io
import base64
import PyPDF2
from openai import OpenAI
import time
import re
import pandas as pd
from datetime import datetime
from typing import List, Dict
import json
import uuid

# New imports for RTF and HTML parsing
from striprtf.striprtf import rtf_to_text
from bs4 import BeautifulSoup

# Additional imports for new features
import plotly.express as px
import streamlit_authenticator as stauth
import threading
import os
import tempfile
from docx import Document
from fpdf import FPDF
import difflib
import hashlib
import pickle
import locale

# App configuration
st.set_page_config(
    page_title="Resume Analyzer",
    page_icon=None,
    layout="wide",
    initial_sidebar_state="expanded",
    menu_items={
        'Get Help': 'https://www.streamlit.io/community',
        'Report a bug': "https://github.com/streamlit/streamlit/issues",
        'About': "# Resume Analyzer\nAnalyze resumes with AI-powered insights and improvement suggestions."
    }
)

# Custom CSS for styling
st.markdown("""
<style>
    .stApp {
        max-width: 100%;
    }
    .main-header {
        font-size: 2.3rem;
        color: #2E7D32;
        margin-bottom: 0.8rem;
        font-weight: 600;
    }
    .sub-header {
        font-size: 1.4rem;
        color: #1B5E20;
        margin-top: 0.8rem;
        margin-bottom: 0.5rem;
        font-weight: 500;
    }
    .info-text {
        font-size: 1rem;
        color: #424242;
        line-height: 1.5;
    }
    .highlight {
        background-color: #E8F5E9;
        padding: 1rem;
        border-radius: 0.5rem;
        border-left: 0.5rem solid #2E7D32;
        margin: 1rem 0;
        overflow-wrap: break-word;
    }
    .stButton > button {
        background-color: #2E7D32;
        color: white;
        font-weight: 500;
        border: none;
        padding: 0.5rem 1rem;
        border-radius: 0.3rem;
        transition: all 0.3s ease;
    }
    .stButton > button:hover {
        background-color: #1B5E20;
        box-shadow: 0 2px 5px rgba(0,0,0,0.2);
    }
    .footer {
        margin-top: 2rem;
        text-align: center;
        color: #9E9E9E;
        padding: 1rem;
        font-size: 0.9rem;
    }
    .download-btn {
        display: inline-block;
        background-color: #2E7D32;
        color: white;
        text-decoration: none;
        padding: 0.5rem 1rem;
        border-radius: 0.3rem;
        text-align: center;
        font-weight: 500;
        margin: 0.5rem 0;
        transition: all 0.3s ease;
    }
    .download-btn:hover {
        background-color: #1B5E20;
        box-shadow: 0 2px 5px rgba(0,0,0,0.2);
    }
</style>
""", unsafe_allow_html=True)

# SambaNova API Configuration
SAMBANOVA_API_URL = "https://api.sambanova.ai/v1"
SAMBANOVA_API_KEY = "Your_key"  # Replace with your actual API key
MODEL_NAME = "Meta-Llama-3.3-70B-Instruct"

# Initialize the OpenAI-compatible client (pointed at SambaNova's endpoint) with caching
@st.cache_resource(ttl=3600)
def get_client():
    return OpenAI(
        base_url=SAMBANOVA_API_URL,
        api_key=SAMBANOVA_API_KEY,
    )

client = get_client()

# AI prompt for resume analysis
SYSTEM_PROMPT = """
You are a professional resume analysis assistant.

Your job is to analyze the uploaded resume text and provide a detailed report including:

- Content quality assessment
- Keyword optimization suggestions
- Structure evaluation
- ATS (Applicant Tracking System) compatibility analysis
- Skills gap analysis
- Improvement suggestions with actionable recommendations
- Before and after examples for improvements
- Industry-specific tips

Format the report in clear markdown with sections and bullet points.
"""

# Function to call the LLM API with progress and error handling
def call_llama(prompt, custom_instructions=""):
    try:
        progress_placeholder = st.empty()
        status_text = st.empty()
        progress_bar = progress_placeholder.progress(0)

        status_text.text("⏳ Initializing request...")
        for i in range(1, 11):
            progress_bar.progress(i * 0.1)
            time.sleep(0.1)

        final_system_prompt = SYSTEM_PROMPT
        if custom_instructions:
            final_system_prompt += f"\n\nAdditional instructions: {custom_instructions}"

        status_text.text("🔄 Sending request to AI model...")
        for i in range(11, 31):
            progress_bar.progress(i * 0.01)
            time.sleep(0.05)

        completion = client.chat.completions.create(
            model=MODEL_NAME,
            messages=[
                {"role": "system", "content": final_system_prompt},
                {"role": "user", "content": prompt}
            ],
            stream=False,
            timeout=60
        )

        status_text.text("🧠 Processing response...")
        for i in range(31, 71):
            progress_bar.progress(i * 0.01)
            time.sleep(0.03)

        status_text.text("✨ Formatting report...")
        for i in range(71, 100):
            progress_bar.progress(i * 0.01)
            time.sleep(0.02)

        progress_bar.progress(100)
        status_text.text("✅ Report generated successfully!")
        time.sleep(0.5)

        progress_placeholder.empty()
        status_text.empty()

        return completion.choices[0].message.content

    except TimeoutError:
        progress_placeholder.empty()
        status_text.empty()
        st.error("⏱️ Request timed out. Please try again with a shorter resume.")
        return "⏱️ Error: Request timed out. Please try again with a shorter resume."

    except Exception as e:
        try:
            progress_placeholder.empty()
            status_text.empty()
        except:
            pass
        error_msg = str(e)
        st.error(f"❌ Error: {error_msg}")
        if "rate limit" in error_msg.lower():
            return "❌ Error: Rate limit exceeded. Please wait a moment before trying again."
        elif "timeout" in error_msg.lower():
            return "❌ Error: Request timed out. Please try with a shorter resume."
        elif "api key" in error_msg.lower():
            return "❌ Error: API authentication issue. Please check your API configuration."
        else:
            return f"❌ Error: {error_msg}"

# Function to create a download link for text content
def get_download_link(text, filename, link_text):
    try:
        b64 = base64.b64encode(text.encode('utf-8')).decode()
        href = f'<a href="data:file/txt;base64,{b64}" download="{filename}" class="download-btn">{link_text}</a>'
        return href
    except Exception as e:
        st.warning(f"Could not create download link: {str(e)}")
        return f"<p>Download failed: {str(e)}</p>"

# Function to count words in text
def count_words(text):
    return len(text.split())

# Function to estimate reading time in minutes
def estimate_reading_time(text):
    words = count_words(text)
    minutes = max(1, round(words / 225))
    return minutes

# Function to extract resume name or title from text (fallback)
def extract_title(text):
    lines = text.strip().splitlines()
    if lines:
        return lines[0].strip()
    return "Resume_Analysis"

# Function to validate and process resume text
def clean_and_normalize_text(text: str) -> str:
    """
    Clean and normalize extracted text by removing extra spaces, fixing line breaks, and other formatting issues.
    """
    # Remove multiple spaces
    text = re.sub(r'[ \t]+', ' ', text)
    # Remove multiple blank lines
    text = re.sub(r'\n\s*\n+', '\n\n', text)
    # Strip leading/trailing spaces on each line
    lines = [line.strip() for line in text.splitlines()]
    text = "\n".join(lines)
    return text

def validate_and_process_resume(resume_text, max_length):
    if not resume_text or resume_text.strip() == "":
        st.error("❌ Empty resume. Please provide some text.")
        return None

    # Clean and normalize text before further processing
    resume_text = clean_and_normalize_text(resume_text)

    word_count = count_words(resume_text)
    if word_count > max_length:
        st.warning(f"⚠️ Resume exceeds maximum length ({word_count} words). Only processing the first {max_length} words.")
        words = resume_text.split()
        resume_text = " ".join(words[:max_length])

    col1, col2 = st.columns(2)
    with col1:
        st.info(f"📊 Word count: {word_count}")
    with col2:
        st.info(f"⏱️ Estimated reading time: {estimate_reading_time(resume_text)} minutes")

    return resume_text

# Function to extract text from uploaded file
def extract_text_from_file(uploaded_file):
    if uploaded_file.type == "text/plain":
        return uploaded_file.read().decode("utf-8")
    elif uploaded_file.type == "application/rtf":
        try:
            rtf_content = uploaded_file.read().decode("utf-8", errors="ignore")
            text = rtf_to_text(rtf_content)
            return text
        except Exception as e:
            st.error(f"❌ Error reading RTF file: {str(e)}")
            return None
    elif uploaded_file.type == "text/html":
        try:
            html_content = uploaded_file.read().decode("utf-8", errors="ignore")
            soup = BeautifulSoup(html_content, "html.parser")
            # Extract visible text
            text = soup.get_text(separator="\n")
            return text
        except Exception as e:
            st.error(f"❌ Error reading HTML file: {str(e)}")
            return None
    elif uploaded_file.type == "application/vnd.openxmlformats-officedocument.wordprocessingml.document":
        doc = docx.Document(uploaded_file)
        return "\n".join([p.text for p in doc.paragraphs])
    elif uploaded_file.type == "application/pdf":
        try:
            pdf_reader = PyPDF2.PdfReader(uploaded_file)
            text = ""
            for page in pdf_reader.pages:
                page_text = page.extract_text()
                if page_text:
                    text += page_text + "\n"
            # Basic formatting cleanup: remove extra blank lines and trailing spaces
            lines = [line.strip() for line in text.splitlines() if line.strip()]
            cleaned_text = "\n".join(lines)
            return cleaned_text
        except Exception as e:
            st.error(f"❌ Error reading PDF file: {str(e)}")
            return None
    else:
        st.error("❌ Unsupported file type.")
        return None

# Function to extract text from uploaded job description file
def extract_text_from_jd_file(uploaded_file):
    if uploaded_file.type == "text/plain":
        return uploaded_file.read().decode("utf-8")
    elif uploaded_file.type == "application/rtf":
        try:
            rtf_content = uploaded_file.read().decode("utf-8", errors="ignore")
            text = rtf_to_text(rtf_content)
            return text
        except Exception as e:
            st.error(f"❌ Error reading RTF file: {str(e)}")
            return None
    elif uploaded_file.type == "text/html":
        try:
            html_content = uploaded_file.read().decode("utf-8", errors="ignore")
            soup = BeautifulSoup(html_content, "html.parser")
            text = soup.get_text(separator="\n")
            return text
        except Exception as e:
            st.error(f"❌ Error reading HTML file: {str(e)}")
            return None
    elif uploaded_file.type == "application/vnd.openxmlformats-officedocument.wordprocessingml.document":
        doc = docx.Document(uploaded_file)
        return "\n".join([p.text for p in doc.paragraphs])
    elif uploaded_file.type == "application/pdf":
        try:
            pdf_reader = PyPDF2.PdfReader(uploaded_file)
            text = ""
            for page in pdf_reader.pages:
                # extract_text() can return None for pages without text
                page_text = page.extract_text()
                if page_text:
                    text += page_text + "\n"
            return text
        except Exception as e:
            st.error(f"❌ Error reading PDF file: {str(e)}")
            return None
    else:
        st.error("❌ Unsupported file type.")
        return None

# New function to extract structured data from resume text
def extract_structured_data(resume_text: str) -> Dict[str, List[str]]:
    """
    Extract structured data such as contact info, education, and experience sections from resume text.
    Returns a dictionary with keys: 'contact_info', 'education', 'experience'.
    """
    structured_data = {
        "contact_info": [],
        "education": [],
        "experience": []
    }

    # Extract contact info (email, phone, LinkedIn)
    email_pattern = r'[\w\.-]+@[\w\.-]+\.\w+'
    # Non-capturing groups so re.findall returns whole matches instead of group tuples
    phone_pattern = r'(?:\+?\d{1,3}[\s.-]?)?(?:\(?\d{3}\)?[\s.-]?)?\d{3}[\s.-]?\d{4}'
    linkedin_pattern = r'(?:https?://)?(?:www\.)?linkedin\.com/in/[A-Za-z0-9_-]+'

    emails = re.findall(email_pattern, resume_text)
    phones = re.findall(phone_pattern, resume_text)
    linkedins = re.findall(linkedin_pattern, resume_text)

    structured_data["contact_info"].extend(set(emails))
    structured_data["contact_info"].extend(set(phones))
    structured_data["contact_info"].extend(set(linkedins))

    # Extract education section (simple heuristic: lines containing keywords)
    education_keywords = ['education', 'university', 'college', 'bachelor', 'master', 'phd', 'degree', 'school']
    education_lines = []
    lines = resume_text.lower().splitlines()
    for i, line in enumerate(lines):
        if any(keyword in line for keyword in education_keywords):
            # Collect lines around the keyword line (3 lines before and after)
            start = max(0, i - 3)
            end = min(len(lines), i + 4)
            education_lines.extend(lines[start:end])
    structured_data["education"] = list(set(education_lines))

    # Extract experience section (simple heuristic: lines containing keywords)
    experience_keywords = ['experience', 'work', 'employment', 'career', 'professional']
    experience_lines = []
    for i, line in enumerate(lines):
        if any(keyword in line for keyword in experience_keywords):
            # Skip lines that also match education keywords to prevent overlap
            if not any(edu_kw in line for edu_kw in education_keywords):
                start = max(0, i - 3)
                end = min(len(lines), i + 4)
                experience_lines.extend(lines[start:end])
    structured_data["experience"] = list(set(experience_lines))

    return structured_data



# Function to display analysis report with actions
def display_report(report_markdown, key_prefix=""):
    if not report_markdown or "Error:" in report_markdown:
        return

    report_id = f"report_{key_prefix}_{int(time.time())}"

    st.markdown('<div class="highlight">', unsafe_allow_html=True)
    st.markdown(f'<div id="{report_id}">', unsafe_allow_html=True)
    st.markdown(report_markdown, unsafe_allow_html=True)
    st.markdown('</div>', unsafe_allow_html=True)
    st.markdown('</div>', unsafe_allow_html=True)

    title = extract_title(report_markdown)
    filename_base = f"{title.replace(' ', '_')}_{datetime.now().strftime('%Y%m%d')}"

    col1, col2, col3 = st.columns(3)


    with col2:
        st.markdown(
            f'<a href="data:file/txt;base64,{base64.b64encode(report_markdown.encode("utf-8")).decode()}" download="{filename_base}.md" style="display:inline-block;background-color:#4CAF50;color:white;padding:10px 20px;border-radius:5px;text-decoration:none;font-weight:bold;margin:5px 0 5px 0;">📥 Download as Markdown</a>',
            unsafe_allow_html=True
        )

    with col3:
        st.markdown(
            f'<a href="data:file/txt;base64,{base64.b64encode(report_markdown.encode("utf-8")).decode()}" download="{filename_base}.txt" style="display:inline-block;background-color:#2196F3;color:white;padding:10px 20px;border-radius:5px;text-decoration:none;font-weight:bold;margin:5px 5px 5px 5px;">📄 Download as Text</a>',
            unsafe_allow_html=True
        )

    st.markdown("---")

    st.markdown("### 📊 Report Statistics")

    stats_data = {
        "Metric": ["Word Count", "Estimated Reading Time"],
        "Value": [
            count_words(report_markdown),
            estimate_reading_time(report_markdown)
        ]
    }

    stats_df = pd.DataFrame(stats_data)
    st.dataframe(stats_df, hide_index=True, use_container_width=True)

def main():
    # Dark mode styling
    st.markdown("""
    <style>
        .stApp {
            background-color: #121212;
            color: #E0E0E0;
        }
        .sidebar .sidebar-content {
            background-color: #1B5E20;
        }
        h1, h2, h3, h4, h5, h6 {
            color: #E0E0E0 !important;
        }
        .info-text {
            color: #BDBDBD !important;
        }
        .highlight {
            background-color: #1B5E20;
            border-left: 0.5rem solid #2E7D32;
        }
        .stDataFrame {
            background-color: #1B5E20;
        }
        div[data-testid="stExpander"] {
            background-color: #1B5E20;
        }
    </style>
    """, unsafe_allow_html=True)

    with st.sidebar:
        st.markdown("# 📄")
        st.markdown("## App Settings")
        st.markdown("### Model Settings")
        model_info = st.expander("Model Information", expanded=False)
        with model_info:
            st.markdown(f"""
            **Model**: {MODEL_NAME}

            This model is optimized for resume analysis and improvement suggestions.
            """)

        st.markdown("### Custom Instructions")
        custom_instructions = st.text_area(
            "Add custom instructions for the AI",
            placeholder="E.g., Focus on ATS compatibility, Emphasize skills gap, etc.",
            height=100
        )

        st.markdown("---")
        st.markdown("### About")
        st.markdown("""
        This app uses AI to analyze resumes and provide actionable improvement suggestions.

        Powered by Meta-Llama-3.3-70B-Instruct via SambaNova.
        """)

        st.markdown("---")
        st.markdown("v1.0.0 | 2025")

    st.markdown('<h1 class="main-header">📄 Resume Analyzer</h1>', unsafe_allow_html=True)
    st.markdown('<p class="info-text">Upload your resume and get AI-powered analysis and improvement suggestions</p>', unsafe_allow_html=True)

    tab1, tab2, tab3 = st.tabs(["📤 Upload", "📋 Sample Demo", "ℹ️ Help"])

    with tab1:
        st.markdown('<h2 class="sub-header">Upload Your Resume and/or Job Description</h2>', unsafe_allow_html=True)

        uploaded_resume = st.file_uploader(
            "📤 Upload resume (.pdf, .docx)",
            type=["pdf", "docx"],
            help="Upload a resume file to generate an analysis report"
        )

        jd_text_input = st.text_area(
            "Or paste job description text here (optional)",
            placeholder="Paste job description text from job portal or other source",
            height=150
        )

        uploaded_jd = st.file_uploader(
            "📤 Or upload job description file (.pdf, .docx) (optional)",
            type=["pdf", "docx"],
            help="Upload a job description file to tailor resume suggestions"
        )

        jd_text = None
        if jd_text_input.strip():
            jd_text = jd_text_input.strip()
        elif uploaded_jd:
            jd_text = extract_text_from_jd_file(uploaded_jd)

        processed_resume = None  # Initialize here to avoid NameError
        processed_jd = None  # Initialize here to avoid NameError

        if uploaded_resume:
            resume_text = extract_text_from_file(uploaded_resume)

            if resume_text:
                processed_resume = validate_and_process_resume(resume_text, 100000)  # cap processing at 100,000 words
                if jd_text:
                    processed_jd = validate_and_process_resume(jd_text, 100000)

        if processed_jd:
            with st.expander("View Original Job Description Text", expanded=False):
                st.text_area("Job Description Content", processed_jd, height=300, disabled=True, key="job_desc_content")

        if processed_resume:
            if st.button("🔍 Generate Analysis", key="generate_analysis", use_container_width=True):
                with st.spinner("Generating analysis report..."):
                    if processed_jd:
                        # Fold the job description into the prompt so a single
                        # call_llama request covers both documents
                        combined_prompt = (
                            f"Resume:\n{processed_resume}\n\n"
                            f"Job Description:\n{processed_jd}\n\n"
                            "Tailor the analysis to the job description above."
                        )
                        report_markdown = call_llama(combined_prompt, custom_instructions)
                    else:
                        report_markdown = call_llama(processed_resume, custom_instructions)
                    display_report(report_markdown, "analysis")

    with tab2:
        st.markdown('<h2 class="sub-header">Try with Sample Resume</h2>', unsafe_allow_html=True)
        st.markdown('<p class="info-text">See how the app works with this sample resume text</p>', unsafe_allow_html=True)

        SAMPLE_RESUME = """
John Doe
123 Main St, Anytown, USA
john.doe@example.com | (555) 123-4567 | linkedin.com/in/johndoe

Professional Summary
Experienced software engineer with a strong background in developing scalable web applications and working with AI technologies.

Skills
- Python, JavaScript, React, Streamlit
- Machine Learning, NLP, Data Analysis
- Cloud Computing, Docker, Kubernetes

Experience
Software Engineer at TechCorp (2018 - Present)
- Developed and maintained web applications using React and Python.
- Led migration to cloud infrastructure, improving scalability by 30%.
- Collaborated with cross-functional teams to deliver AI-powered features.

Education
Bachelor of Science in Computer Science, University of Somewhere (2014 - 2018)

Certifications
- AWS Certified Solutions Architect
- Certified Scrum Master
"""

        with st.expander("View Sample Resume Text", expanded=False):
            st.text_area("Sample Resume Content", SAMPLE_RESUME, height=300, disabled=True, key="sample_resume_content")

        st.info(f"📊 Sample resume: {count_words(SAMPLE_RESUME)} words, ~{estimate_reading_time(SAMPLE_RESUME)} minute read")

        if st.button("🔍 Generate Sample Analysis", key="generate_sample", use_container_width=True):
            with st.spinner("Generating analysis report..."):
                report_markdown = call_llama(SAMPLE_RESUME, custom_instructions)
                display_report(report_markdown, "sample")

    with tab3:
        st.markdown('<h2 class="sub-header">How to Use This App</h2>', unsafe_allow_html=True)

        with st.expander("📤 Upload Tab", expanded=True):
            st.markdown("""
            1. Upload your resume as a .pdf or .docx file
            2. Review the word count and estimated reading time
            3. Click "Generate Analysis" to process the resume
            4. View, copy, or download the generated report
            """)

        with st.expander("📋 Sample Demo Tab", expanded=False):
            st.markdown("""
            1. View the sample resume text to understand the expected format
            2. Click "Generate Sample Analysis" to see how the app works
            3. Use this as a reference for your own resumes
            """)

        with st.expander("🛠️ Advanced Features", expanded=False):
            st.markdown("""
            - **Custom Instructions**: Add specific instructions for the AI in the sidebar
            - **Length Limit**: Resumes longer than 100,000 words are truncated before analysis
            - **Report Statistics**: View word count and estimated reading time for your report
            """)

        with st.expander("📄 Supported File Formats", expanded=False):
            st.markdown("""
            - PDF (.pdf)
            - Microsoft Word (.docx)
            """)

        with st.expander("❓ Troubleshooting", expanded=False):
            st.markdown("""
            **Common Issues:**

            1. **Analysis generation fails**
               - Try with a shorter resume
               - Check your internet connection
               - Wait a few minutes and try again
            2. **File upload issues**
               - Ensure your file is in .pdf or .docx format
               - Try saving your file with a different encoding (UTF-8 recommended)
               - Check if the file is not corrupted
            3. **Download issues**
               - Try using a different browser
               - Check if your browser allows downloads
            """)

    st.markdown('<div class="footer">Last updated: 2025</div>', unsafe_allow_html=True)

if __name__ == "__main__":
    try:
        main()
    except Exception as e:
        st.error(f"❌ Application Error: {str(e)}")
        st.info("💡 Please refresh the page and try again. If the problem persists, contact support.")