{% with messages = get_flashed_messages(with_categories=true) %}
  {% if messages %}
    {% for category, message in messages %}
      {{ message }}
    {% endfor %}
  {% endif %}
{% endwith %}
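The Jinja block above consumes Flask's flashed messages. A minimal sketch of the server side that produces them, assuming a standard Flask app (the route path and message text are hypothetical):

```python
from flask import Flask, flash, redirect

app = Flask(__name__)
app.secret_key = "change-me"  # flashing uses the session, so a secret key is required

@app.route("/settings/save", methods=["POST"])  # hypothetical endpoint
def save_settings():
    # The (category, message) pair flashed here is what the template's
    # get_flashed_messages(with_categories=true) loop iterates over.
    flash("Settings saved.", "success")
    return redirect("/settings")  # back to the page that renders the flash block
```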

LLM Settings

Select the default language model to use for analysis
{{ config.DEFAULT_TEMPERATURE }} Higher values produce more creative output; lower values are more deterministic
Maximum tokens for model output
Base URL for OpenRouter API

vLLM Configuration

Maximum new tokens for vLLM models
Top-k sampling parameter (consider only the k most likely tokens)
{{ config.VLLM_TOP_P }} Top-p (nucleus) sampling parameter
{{ config.VLLM_TEMPERATURE }} Temperature for vLLM models
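A toy illustration of what these three sampling knobs do to a next-token distribution (a stdlib-only sketch of the general technique, not vLLM's implementation):

```python
import math

def sample_filter(logits, temperature=1.0, top_k=0, top_p=1.0):
    """Shape a next-token distribution with temperature, top-k, and top-p."""
    # Temperature scales logits: <1 sharpens the distribution, >1 flattens it.
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax to probabilities.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Top-k: keep only the k most probable tokens (0 disables the filter).
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    if top_k > 0:
        order = order[:top_k]
    # Top-p (nucleus): keep the smallest prefix whose cumulative mass >= top_p.
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalise over the surviving tokens.
    z = sum(probs[i] for i in kept)
    return {i: probs[i] / z for i in kept}
```

For example, lowering the temperature concentrates probability mass on the most likely token, which is why low values behave more deterministically.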

Search Settings

Choose your primary search and research source
Number of research cycles to perform
Number of follow-up questions to generate in each cycle
Higher values give more comprehensive research but take longer
Number of results kept after relevance filtering
How to accumulate knowledge during research
Maximum context size for knowledge accumulation
Region code for search (e.g., us, uk, de)
Time period for search results
Language for search results
Enable quality filtering for search results
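Taken together, the cycle count, follow-up question count, and result-filtering settings control an iterative research loop. A toy sketch of how they compose (all function names here are hypothetical stand-ins, not the app's actual API):

```python
def run_research(query, iterations=2, questions_per_cycle=3,
                 max_results=10, search=None, ask_followups=None):
    """Toy research loop: each cycle searches the current questions,
    then derives follow-up questions from the findings so far."""
    findings = []
    questions = [query]
    for _ in range(iterations):
        for q in questions:
            # Keep at most max_results filtered results per question.
            findings.extend(search(q)[:max_results])
        # Next cycle's questions come from what was found so far.
        questions = ask_followups(findings)[:questions_per_cycle]
    return findings
```

More cycles and more follow-up questions multiply the number of searches run, which is why higher values give more comprehensive research but take longer.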

Report Settings

Number of searches to run per report section
Directory to save research outputs
Cancel
Show Raw Configuration