Releases: khive-ai/lionagi
v0.9.1
LionAGI v0.9.1 Release Notes
This minor release (v0.9.1) builds upon v0.9.0, enhancing the ReAct functionality to support more iterative and streaming workflows. We also provide additional parameters for controlling intermediate results and reasoning depth.
1. Enhanced ReAct Iteration & Streaming
- `ReActStream`: a new function that yields partial analysis or sub-results at each step, enabling you to observe the chain-of-thought or intermediate states in real time.
- Parameters:
  - `reasoning_effort`: `"low"`, `"medium"`, or `"high"`, influencing thoroughness.
  - `intermediate_response_options`: configures how partial intermediate results are stored and validated.
  - `display_as`: `"json"` vs. `"yaml"` for the final output style.
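The yield-per-step behavior can be sketched with a plain async generator. This is an illustrative stand-in only: `react_stream` and `Analysis` below are hypothetical names, not the actual lionagi API.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Analysis:
    round: int
    note: str

async def react_stream(question: str, max_rounds: int = 3):
    # Each reasoning round yields its intermediate analysis before the
    # final answer, so callers can observe progress in real time.
    for i in range(1, max_rounds + 1):
        # ... a real implementation would call the model/tools here ...
        yield Analysis(round=i, note=f"round {i} reasoning about {question}")
    yield f"final answer to: {question}"

async def collect():
    return [step async for step in react_stream("2 + 2?")]

steps = asyncio.run(collect())
```

Consuming the generator with `async for`, as in `collect()`, is what lets callers log or display each intermediate state as it arrives.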
2. Branch.ReAct & Branch.ReActStream
- The `Branch` class now offers a streaming version (`ReActStream`) if you want partial iteration logs, plus a standard `ReAct` for simpler usage.
- Integration with `as_readable` supports either JSON or YAML-like printing.
3. Additional Adjustments
- Notebook updates in `notebooks/react.ipynb` to demonstrate iterative usage.
- Minor changes in the `interpret` and parse modules for improved synergy with ReAct loops.
- Version: bumped from `0.9.0` to `0.9.1`.
4. Compatibility & Notes
- This release is backward-compatible with v0.9.0.
- If you relied on the older “format_curly” usage, you can now pass `display_as="yaml"` for a similar effect.
- For advanced partial chaining, see the new ReAct streaming examples.
We hope you find these new iterative ReAct improvements helpful in creating more dynamic, stepwise intelligence flows. If you have any issues or suggestions, feel free to open a ticket or join our discussions.
What's Changed
- Dev r1 by @ohdearquant in #551
Full Changelog: v0.9.0...v0.9.1
v0.9.0
LionAGI v0.9.0 Release Notes
This release introduces significant system prompt enhancements and display improvements for nested data (via `as_readable`), along with various refinements across the codebase. Below are the highlights and key changes:
1. System Prompt Enhancement
- `lionagi/session/prompts.py` now provides an optional Lion system message that can be prepended automatically via a new `use_lion_system_message` flag in `Branch.__init__()`.
- This message outlines LionAGI’s fundamental concepts (branches, sessions, tools, etc.) and is inserted automatically if desired.
2. Displaying Nested Data with as_readable
- `as_readable` and `format_dict` are now more robust, enabling:
  - YAML-like formatting (`format_curly=True`) for nested dictionaries and lists, with multi-line strings shown as “block scalars.”
  - Pretty-printed JSON (the default if `format_curly=False`).
  - Optional Markdown code fences if `md=True`.
- Simplified logic for multi-item output (lists of items) and single items.
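A minimal sketch of the JSON vs. YAML-like switch described above. This is illustrative only: the `render` helper below is hypothetical, and the real `as_readable`/`format_dict` also handle block scalars, lists of items, and more.

```python
import json

def render(data: dict, format_curly: bool = False, md: bool = False) -> str:
    # Illustrative stand-in for the as_readable/format_dict behavior:
    # YAML-like output when format_curly=True, pretty JSON otherwise.
    if format_curly:
        lines = []

        def walk(obj, indent=0):
            pad = "  " * indent
            for k, v in obj.items():
                if isinstance(v, dict):
                    lines.append(f"{pad}{k}:")
                    walk(v, indent + 1)
                else:
                    lines.append(f"{pad}{k}: {v}")

        walk(data)
        text = "\n".join(lines)
        fence = "yaml"
    else:
        text = json.dumps(data, indent=2)
        fence = "json"
    if md:
        ticks = "`" * 3  # wrap the output in a Markdown code fence
        text = f"{ticks}{fence}\n{text}\n{ticks}"
    return text
```

For example, `render({"a": 1, "b": {"c": 2}}, format_curly=True)` yields indented `key: value` lines, while the default path emits pretty-printed JSON.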
3. ReAct Flow & Verbose Output
- `ReAct.py` leverages `as_readable` to print chain-of-thought or intermediate analyses in either JSON or YAML style.
- A new parameter `verbose_length` can truncate extremely long outputs to keep console logs readable.
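The truncation idea is straightforward; here is a minimal illustrative helper (a hypothetical `truncate` function, not lionagi's implementation):

```python
from typing import Optional

def truncate(text: str, verbose_length: Optional[int] = None) -> str:
    # Clip verbose output to verbose_length characters, marking the
    # cut with "..." so readers know text was shortened.
    if verbose_length is None or len(text) <= verbose_length:
        return text
    return text[:verbose_length] + "..."
```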
4. Interpret & Parse Enhancements
- `interpret(...)` can now take an optional `interpret_model` to override the default chat model.
- Minor improvements in `parse.py` to handle partial LLM results if an operation is interrupted (`InterruptedError`, `CancelledError`).
5. Canceled Calls in APICalling
- `APICalling` now gracefully catches and re-raises `asyncio.CancelledError` to stop tasks immediately.
- Ensures partial error/log states are recorded properly if a user or environment cancels a request.
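The catch-log-re-raise pattern described above can be sketched generically. The names below (`guarded_call`, `log`) are illustrative; the real logic lives inside `APICalling`.

```python
import asyncio

log = []  # records partial state when a call is cancelled

async def guarded_call():
    try:
        await asyncio.sleep(10)  # stands in for a long-running API request
    except asyncio.CancelledError:
        log.append("call cancelled")  # record the error/log state
        raise  # re-raise so the task actually stops

async def main():
    task = asyncio.create_task(guarded_call())
    await asyncio.sleep(0.01)  # let the call start
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return task.cancelled()

was_cancelled = asyncio.run(main())
```

Re-raising is the crucial step: swallowing `CancelledError` would leave the task marked as completed instead of cancelled.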
6. Other Notable Updates
- Version bump: from `0.8.8` to `0.9.0`.
- OpenAI chat completions: additional check for `o1-2024-12-17` to remove certain parameters and switch the role from “system” to “developer.”
- Minor debug statements and concurrency clarifications.
Upgrade Notes & Compatibility
- No breaking API changes beyond the new optional `use_lion_system_message`. Existing flows remain compatible.
- If you use custom code for printing or hooking chain-of-thought logs, you can now leverage the new `verbose_length` parameter in `ReAct` for convenience.
Thank You for Upgrading!
We hope these improvements make your LionAGI experience more robust and transparent. If you encounter issues or have feedback, please file a ticket or join our community discussions.
What's Changed
- Update system prompt by @ohdearquant in #547
- chore: update copyright year to 2025 in multiple files by @ohdearquant in #550
Full Changelog: v0.8.8...v0.9.0
v0.8.8
Release Note (v0.8.8)
Highlights:
- Enhanced Streaming Support
  - `APICalling` now offers an integrated `stream()` method, allowing partial responses to be read in real time.
  - The `Processor` can differentiate between streaming and normal requests. Users can set `event.streaming = True` to handle chunked data or partial JSON responses.
- Rate Limiting & Concurrency
  - Unified approach to rate limiting for both streaming and non-streaming requests.
  - A new `concurrency_limit` parameter ensures only a specified number of streams run in parallel.
- Removed `litellm` Dependency
  - Eliminated references to `litellm` in `chat_completion.py` and other modules.
  - `pyproject.toml` updated to reflect the removal, streamlining environment setup.
- Other Improvements
  - Minor file reorganizations, including moving file-based tool logic into `lionagi/tools/file/...`.
  - Improved error handling around streaming requests (e.g., gracefully handling JSON parse errors).
  - Version bumped from `0.8.7` to `0.8.8` to capture these improvements.
Upgrade Notes:
- Uninstall `litellm` if it was explicitly installed; it is no longer needed.
- When using streaming, ensure your code checks whether an event is `streaming` and calls `event.stream()` (or the updated `imodel.stream()`) appropriately.
- No breaking changes for standard invocation usage, but verify that any direct references to `litellm` in your code are removed.
Recommended Next Steps:
- Explore the new streaming mechanics in `APICalling` for partial response handling.
- If you have custom concurrency requirements, leverage the new `concurrency_limit` param in `RateLimitedAPIExecutor` for streaming calls.
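A `concurrency_limit`-style cap is typically built on a semaphore. The sketch below is a generic illustration of that idea, not lionagi's internals; `run_with_limit` and `job` are hypothetical names.

```python
import asyncio

async def run_with_limit(jobs, concurrency_limit: int) -> int:
    # Cap how many jobs run at once with a semaphore, and track the
    # peak number of simultaneously active jobs to verify the cap.
    sem = asyncio.Semaphore(concurrency_limit)
    active = peak = 0

    async def run(job):
        nonlocal active, peak
        async with sem:  # at most concurrency_limit jobs inside
            active += 1
            peak = max(peak, active)
            await job()
            active -= 1

    await asyncio.gather(*(run(j) for j in jobs))
    return peak  # highest number of jobs that ran simultaneously

async def job():
    await asyncio.sleep(0.01)  # stands in for one streaming request

peak = asyncio.run(run_with_limit([job] * 10, concurrency_limit=3))
```

Here ten jobs are submitted at once, but the semaphore guarantees no more than three execute concurrently.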
What's Changed
- Updated imodel2 by @ohdearquant in #544
- remove litellm dependency by @ohdearquant in #545
Full Changelog: v0.8.7...v0.8.8
v0.8.7
Updated default `ReAct` settings
- updated default `extension_allowed=True` (was `False`)
- updated default `max_extensions=3` (was `None`)
- updated the maximum ReAct loop count to `100`; use with caution
What's Changed
- docs: update README for clarity and structure by @ohdearquant in #541
- Update react by @ohdearquant in #543
Full Changelog: v0.8.6...v0.8.7
v0.8.6
Release Note
Release 0.8.6
- ReAct Verbosity Enhancements
  - A new `verbose_analysis` flag shows detailed per-round reasoning and expansions in the ReAct loop.
  - Each action invocation can now emit logs via the `verbose_action` parameter, providing clearer debugging output.
- Action Strategy & Batching
  - The LLM can specify how it wants to run multiple actions in one round (`concurrent`, `sequential`, or `batch`) via the `action_strategy` field.
  - If `batch` is chosen, `action_batch_size` indicates how many actions to run in parallel per batch.
- Minor Fixes & Cleanups
  - Improved docstrings, type hints, and structured error logging for tool calls.
  - Adjusted version to `0.8.6`.
Users who wish to see detailed chain-of-thought expansions should set `verbose=True` in their ReAct calls.
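The `batch` strategy above amounts to running actions in slices of `action_batch_size`, each slice executed concurrently. The sketch below is a hypothetical illustration of that pattern, not lionagi's implementation:

```python
import asyncio

async def run_batched(actions, action_batch_size: int):
    # Execute actions in slices of action_batch_size; each slice runs
    # concurrently, and slices run one after another.
    results = []
    for i in range(0, len(actions), action_batch_size):
        batch = actions[i : i + action_batch_size]
        results.extend(await asyncio.gather(*(a() for a in batch)))
    return results

def make_action(n):
    async def action():
        await asyncio.sleep(0)  # stands in for a real tool call
        return n * n
    return action

actions = [make_action(n) for n in range(5)]
results = asyncio.run(run_batched(actions, action_batch_size=2))
```

With five actions and a batch size of two, this runs three batches (2 + 2 + 1) while preserving result order.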
What's Changed
- Add verbose to reAct operation by @ohdearquant in #540
Full Changelog: v0.8.5...v0.8.6
v0.8.5
Release Note (v0.8.5)
Enhancements:
- Added `request_options` to `EndpointConfig`, enabling endpoints to specify a Pydantic model for typed requests.
- Perplexity chat completions are now supported with a new `PerplexityChatCompletionRequest`.
- `iModel` objects can query the endpoint for its typed request model via `request_options`.
Other Changes:
- Extended `ReAct` to accept optional interpret parameters (`interpret_domain`, `interpret_style`, `interpret_sample`, etc.).
- Incremented version from 0.8.4 to 0.8.5.
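A typed request model of the kind `request_options` points to is just a Pydantic model. The `ChatRequest` below is hypothetical, for illustration; the real `PerplexityChatCompletionRequest` has different fields.

```python
from pydantic import BaseModel

class ChatRequest(BaseModel):
    # Hypothetical request model: fields are validated and typed,
    # with defaults filled in automatically.
    model: str
    prompt: str
    max_tokens: int = 256

req = ChatRequest(model="sonar", prompt="hello")
```

Validation at construction time is the payoff: malformed requests fail before any API call is made.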
Upgrade Notes:
- No breaking changes. Existing endpoints remain compatible.
- Projects can optionally start using
request_options
on newly-updated endpoints (like Perplexity) for strongly-typed requests.
What's Changed
- Update imodel by @ohdearquant in #539
Full Changelog: v0.8.4...v0.8.5
v0.8.4
Release Note (v0.8.4)
Highlights:
- Dynamic `connect()`: a major enhancement to the `Branch` class that lets code register new tools in real time. This feature drastically simplifies integrating iModel-based endpoints without manually defining separate function wrappers.
Key Benefits:
- Low-code integration: connect brand-new endpoints with minimal lines of code.
- Flexible overwrites: update existing tool definitions with `update=True`.
- Enhanced extensibility: perfect for dynamic multi-model experiments or modular expansions of the AI system.
Upgrade Steps:
- Pull/Install: `pip install --upgrade lionagi==0.8.4`
- Refactor: use `branch.connect()` to define new endpoint-based tools.
- Review: ensure any existing named tool is updated only when passing `update=True`.
Compatibility:
- Non-breaking update. No config or code changes required except for using the new feature if desired.
Happy coding with LionAGI v0.8.4!
What's Changed
- Adding connect by @ohdearquant in #538
Full Changelog: v0.8.3...v0.8.4
v0.8.3
Release Notes (v0.8.3)
- Improved ReAct Flow:
  - Refined how `ReAct.py` checks and triggers additional extension rounds.
  - More intuitive messages in `ReActAnalysis`, helping guide expansions if needed.
- Enhanced Tool Argument Validation:
  - `function_calling.py` and `tool.py` ensure that Pydantic-based schemas are applied correctly, preventing accidental mismatches in arguments.
- Streamlined Text Output:
  - `format_text_content` in `instruction.py` now produces more concise, token-friendly content while retaining essential context.
- Version Bump:
  - Incremented the `lionagi` version to `0.8.3`, signifying these feature refinements and QA improvements.
No breaking changes are expected; existing workflows that rely on these classes and functions should continue to work with minimal adjustments.
What's Changed
- Updated tools by @ohdearquant in #537
Full Changelog: v0.8.2...v0.8.3
v0.8.2
Added: `new_model` method to `OperableModel`
What's Changed
- Update operable model by @ohdearquant in #536
Full Changelog: v0.8.1...v0.8.2
v0.8.1
Fixed: streaming in operations
What's Changed
- Fix stream by @ohdearquant in #535
Full Changelog: v0.8.0...v0.8.1