
Releases: khive-ai/lionagi

v0.9.1

25 Jan 03:13
a9af28c

LionAGI v0.9.1 Release Notes

This minor release (v0.9.1) builds on v0.9.0, extending ReAct to support iterative and streaming workflows and adding parameters for controlling intermediate results and reasoning depth.


1. Enhanced ReAct Iteration & Streaming

  • ReActStream: A new function that yields partial analysis or sub-results each step, enabling you to observe the chain-of-thought or intermediate states in real time.
  • Parameters:
    • reasoning_effort: "low", "medium", or "high", influencing thoroughness.
    • intermediate_response_options: Configures how partial intermediate results are stored/validated.
    • display_as: "json" vs. "yaml" for final output style.
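A rough sketch of how the display_as switch might behave (illustrative only, not lionagi's implementation; the YAML branch here is a deliberately naive dumper):

```python
import json

def render(data: dict, display_as: str = "json") -> str:
    """Render a result dict as pretty JSON or a naive YAML-like string."""
    if display_as == "json":
        return json.dumps(data, indent=2)
    if display_as == "yaml":
        # Naive YAML-style rendering: one "key: value" line per entry.
        return "\n".join(f"{k}: {v}" for k, v in data.items())
    raise ValueError(f"unsupported display_as: {display_as!r}")
```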

2. Branch.ReAct & Branch.ReActStream

  • The Branch class now offers a streaming version (ReActStream) if you want partial iteration logs, plus a standard ReAct for simpler usage.
  • Integration with as_readable supports either JSON or YAML-like printing.
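The relationship between the streaming and non-streaming entry points can be pictured with an async generator and a wrapper that drains it (a sketch with hypothetical names, not lionagi's actual signatures):

```python
import asyncio
from typing import AsyncIterator

async def react_stream(steps: list) -> AsyncIterator[dict]:
    """Yield one partial analysis per reasoning round."""
    for i, step in enumerate(steps, start=1):
        await asyncio.sleep(0)  # stand-in for an LLM call
        yield {"round": i, "analysis": step}

async def react(steps: list) -> dict:
    """Non-streaming variant: drain the stream, return only the final round."""
    final = None
    async for partial in react_stream(steps):
        final = partial
    return final

result = asyncio.run(react(["plan", "act", "answer"]))
```

The streaming form lets callers observe each intermediate round; the plain form keeps the simpler call-and-wait shape.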

3. Additional Adjustments

  • Notebook updates in notebooks/react.ipynb to demonstrate iterative usage.
  • Minor changes in the interpret and parse modules for better interoperation with ReAct loops.
  • Version: Bumped from 0.9.0 to 0.9.1.

4. Compatibility & Notes

  • This release is backward-compatible with v0.9.0.
  • If you rely on older “format_curly” usage, you can now do display_as="yaml" for a similar effect.
  • For advanced partial chaining, see the new ReAct streaming in examples.

We hope these iterative ReAct improvements help you build more dynamic, stepwise workflows. If you run into issues or have suggestions, feel free to open a ticket or join our discussions.

What's Changed

Full Changelog: v0.9.0...v0.9.1

v0.9.0

25 Jan 00:08
13af417

LionAGI v0.9.0 Release Notes

This release introduces significant system prompt enhancements and display improvements for nested data (via as_readable), along with various refinements across the codebase. Below are the highlights and key changes:


1. System Prompt Enhancement

  • lionagi/session/prompts.py now provides an optional Lion system message that can be prepended automatically via a new use_lion_system_message flag in Branch.__init__().
  • This message outlines LionAGI’s fundamental concepts (branches, sessions, tools, etc.) and is automatically inserted if desired.
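Conceptually, the flag prepends a canned system message ahead of any user-supplied one. A minimal sketch (the class body and message text here are stand-ins, not lionagi's real code):

```python
class Branch:
    """Sketch of the use_lion_system_message flag (not lionagi's real class)."""

    LION_SYSTEM_MESSAGE = "You are working inside LionAGI: branches, sessions, tools."

    def __init__(self, system=None, use_lion_system_message=False):
        self.messages = []
        if use_lion_system_message:
            # The Lion message is inserted first, before any custom system prompt.
            self.messages.append({"role": "system", "content": self.LION_SYSTEM_MESSAGE})
        if system:
            self.messages.append({"role": "system", "content": system})

b = Branch(system="Be terse.", use_lion_system_message=True)
```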

2. Displaying Nested Data with as_readable

  • as_readable and format_dict are now more robust, enabling:
    • YAML-like formatting (format_curly=True) for nested dictionaries and lists, with multi-line strings shown as “block scalars.”
    • Pretty-printed JSON (the default if format_curly=False).
    • Optional Markdown code fences if md=True.
  • Simplified logic for multi-item output (lists of items) and single items.
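A toy version of such a formatter, assuming the format_curly and md parameters described above (the block-scalar handling is deliberately simplified):

```python
import json

def as_readable(data: dict, format_curly: bool = False, md: bool = False) -> str:
    """Sketch: pretty JSON by default; YAML-like with block scalars if format_curly."""
    if format_curly:
        lines = []
        for k, v in data.items():
            if isinstance(v, str) and "\n" in v:
                # Multi-line strings rendered as YAML block scalars.
                lines.append(f"{k}: |")
                lines.extend(f"  {ln}" for ln in v.splitlines())
            else:
                lines.append(f"{k}: {v}")
        text = "\n".join(lines)
    else:
        text = json.dumps(data, indent=2)
    if md:
        marks = "`" * 3  # wrap output in a Markdown code fence
        fence = "yaml" if format_curly else "json"
        text = f"{marks}{fence}\n{text}\n{marks}"
    return text
```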

3. ReAct Flow & Verbose Output

  • ReAct.py leverages as_readable to print chain-of-thought or intermediate analyses in either JSON or YAML style.
  • New parameter verbose_length can truncate extremely long outputs to keep console logs readable.
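The truncation behavior can be sketched as follows (assumed semantics; the real parameter may format the cut differently):

```python
def truncate(text: str, verbose_length=None) -> str:
    """Sketch of verbose_length-style truncation for console logs."""
    if verbose_length is None or len(text) <= verbose_length:
        return text
    return text[:verbose_length] + "..."
```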

4. Interpret & Parse Enhancements

  • interpret(...) can now take an optional interpret_model to override the default chat model.
  • Minor improvements in parse.py to handle partial LLM results if an operation is interrupted (InterruptedError, CancelledError).

5. Canceled Calls in APICalling

  • APICalling now gracefully catches and re-raises asyncio.CancelledError to stop tasks immediately.
  • Ensures partial error/log states are recorded properly if a user or environment cancels a request.
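The catch-log-re-raise pattern looks roughly like this (a self-contained sketch, not APICalling itself):

```python
import asyncio

log = []

async def api_call():
    """Record a partial/cancelled state, then re-raise so the task stops promptly."""
    try:
        await asyncio.sleep(10)  # stand-in for a long-running request
        log.append("completed")
    except asyncio.CancelledError:
        log.append("cancelled")  # partial state recorded before propagating
        raise  # re-raise so asyncio actually cancels the task

async def main():
    task = asyncio.create_task(api_call())
    await asyncio.sleep(0)  # let the task start and suspend
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass

asyncio.run(main())
```

Re-raising is the important part: swallowing CancelledError would leave the task running as if nothing happened.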

6. Other Notable Updates

  • Version Bump: from 0.8.8 to 0.9.0.
  • OpenAI chat completions: Additional check for o1-2024-12-17 to remove certain parameters and switch role from “system” to “developer”.
  • Minor debug statements and concurrency clarifications.

Upgrade Notes & Compatibility

  • No breaking API changes beyond the new optional use_lion_system_message. Existing flows remain compatible.
  • If you use custom code for printing or hooking chain-of-thought logs, you can now leverage the new verbose_length parameter in ReAct for convenience.

Thank You for Upgrading!

We hope these improvements make your LionAGI experience more robust and transparent. If you encounter issues or have feedback, please file a ticket or join our community discussions.

What's Changed

Full Changelog: v0.8.8...v0.9.0

v0.8.8

23 Jan 15:44
7785af2

Release Note (v0.8.8)

Highlights:

  1. Enhanced Streaming Support

    • APICalling now offers an integrated stream() method, allowing partial responses to be read in real time.
    • The Processor can differentiate between streaming and normal requests. Users can set event.streaming = True to handle chunked data or partial JSON responses.
  2. Rate Limiting & Concurrency

    • Unified approach to rate limiting for both streaming and non-streaming requests.
    • New concurrency_limit parameter ensures only a specified number of streams run in parallel.
  3. Removed litellm Dependency

    • Eliminated references to litellm in chat_completion.py and other modules.
    • pyproject.toml updated to reflect the removal, streamlining environment setup.
  4. Other Improvements

    • Minor file reorganizations, including moving file-based tool logic into lionagi/tools/file/....
    • Improved error handling around streaming requests (e.g., gracefully handling JSON parse errors).
    • Version bumped from 0.8.7 to 0.8.8 to capture these improvements.
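The interplay of streaming and the concurrency limit can be sketched with a semaphore gating async generators (stand-in names; the actual mechanics of RateLimitedAPIExecutor may differ):

```python
import asyncio

async def stream_chunks(name: str):
    """Stand-in for a stream() method: yields chunks of a response."""
    for chunk in ("partial", "done"):
        await asyncio.sleep(0)
        yield f"{name}:{chunk}"

async def run_streams(names, concurrency_limit: int = 2):
    """Run several streams, allowing at most concurrency_limit in flight."""
    sem = asyncio.Semaphore(concurrency_limit)
    results = []

    async def consume(name):
        async with sem:  # gate: only concurrency_limit streams run at once
            chunks = [c async for c in stream_chunks(name)]
            results.append(chunks[-1])

    await asyncio.gather(*(consume(n) for n in names))
    return sorted(results)

final = asyncio.run(run_streams(["a", "b", "c"], concurrency_limit=2))
```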

Upgrade Notes:

  • Uninstall litellm if it was explicitly installed; it is no longer needed.
  • When using streaming, ensure your code checks if an event is streaming and calls event.stream() (or the updated imodel.stream()) appropriately.
  • No breaking changes for standard invocation usage, but verify any direct references to litellm in your code are removed.

Recommended Next Steps:

  • Explore the new streaming mechanics in APICalling for partial response handling.
  • If you have custom concurrency requirements, leverage the new concurrency_limit param in RateLimitedAPIExecutor for streaming calls.

What's Changed

Full Changelog: v0.8.7...v0.8.8

v0.8.7

23 Jan 05:54
db8e5b1

Updated default ReAct settings

  • Changed the default extension_allowed from False to True.
  • Changed the default max_extensions from None to 3.
  • Raised the maximum ReAct loop count to 100; use with caution.
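Taken together, the new defaults amount to a bounded extension loop, roughly like this sketch (assumed semantics only):

```python
def react_loop(needs_more, max_extensions: int = 3, hard_cap: int = 100) -> int:
    """Run extension rounds until done, bounded by max_extensions and a hard cap."""
    limit = min(max_extensions, hard_cap)
    rounds = 0
    while rounds < limit and needs_more(rounds):
        rounds += 1
    return rounds

# With the new defaults, an always-hungry loop stops after 3 extensions.
done = react_loop(lambda r: True)
```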

What's Changed

Full Changelog: v0.8.6...v0.8.7

v0.8.6

21 Jan 01:41
ab0dfab

Release Note

Release 0.8.6

  • ReAct Verbosity Enhancements
    • A new verbose_analysis flag shows detailed per-round reasoning and expansions in the ReAct loop.
    • Each action invocation can now emit logs with the verbose_action parameter, providing clearer debugging output.
  • Action Strategy & Batching
    • The LLM can specify how it wants to run multiple actions in one round (concurrent, sequential, or batch) via the action_strategy field.
    • If batch is chosen, the action_batch_size indicates how many actions to run in parallel per batch.
  • Minor Fixes & Cleanups
    • Improved docstrings, type hints, and structured error logging for tool calls.
    • Adjusted version to 0.8.6.

Users who wish to see detailed chain-of-thought expansions should set verbose=True in their ReAct calls.
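The three strategies can be sketched as a dispatch over asyncio primitives (illustrative only; the real dispatch lives inside lionagi's action handling):

```python
import asyncio

async def run_actions(actions, action_strategy="concurrent", action_batch_size=2):
    """Sketch of strategy dispatch: concurrent, sequential, or batched."""
    if action_strategy == "concurrent":
        # All actions launched at once.
        return list(await asyncio.gather(*(a() for a in actions)))
    if action_strategy == "sequential":
        # One at a time, in order.
        return [await a() for a in actions]
    if action_strategy == "batch":
        # action_batch_size actions in parallel per batch.
        out = []
        for i in range(0, len(actions), action_batch_size):
            batch = actions[i : i + action_batch_size]
            out.extend(await asyncio.gather(*(a() for a in batch)))
        return out
    raise ValueError(f"unknown strategy: {action_strategy!r}")

async def make(n):
    return n

acts = [lambda n=n: make(n) for n in range(5)]
batched = asyncio.run(run_actions(acts, "batch", action_batch_size=2))
```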

What's Changed

Full Changelog: v0.8.5...v0.8.6

v0.8.5

20 Jan 17:49
30c7b14

Release Note (v0.8.5)

Enhancements:

  • Added request_options to EndpointConfig, enabling endpoints to specify a Pydantic model for typed requests.
  • Perplexity chat completions are now supported with a new PerplexityChatCompletionRequest.
  • iModel objects can query the endpoint for its typed request model via request_options.
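The idea of request_options as a typed request model can be sketched with dataclasses standing in for Pydantic models (the class names mirror the notes above, but the fields and signatures are assumptions):

```python
from dataclasses import dataclass, field, asdict

@dataclass
class PerplexityChatCompletionRequest:
    """Sketch of a typed request model (the real one is Pydantic-based)."""
    model: str
    messages: list = field(default_factory=list)
    temperature: float = 0.7

@dataclass
class EndpointConfig:
    """Sketch: an endpoint config that carries its typed request model."""
    name: str
    request_options: type = None

cfg = EndpointConfig(name="perplexity_chat",
                     request_options=PerplexityChatCompletionRequest)
# Callers can build a validated, typed request from the endpoint's model.
req = cfg.request_options(model="sonar", messages=[{"role": "user", "content": "hi"}])
payload = asdict(req)
```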

Other Changes:

  • Extended ReAct to accept optional interpret parameters (interpret_domain, interpret_style, interpret_sample, etc.).
  • Incremented version from 0.8.4 to 0.8.5.

Upgrade Notes:

  • No breaking changes. Existing endpoints remain compatible.
  • Projects can optionally start using request_options on newly-updated endpoints (like Perplexity) for strongly-typed requests.

What's Changed

Full Changelog: v0.8.4...v0.8.5

v0.8.4

20 Jan 06:41
20ca278

Release Note (v0.8.4)

Highlights:

  • Dynamic connect()
    A major enhancement to the Branch class that lets code register new tools at runtime. This drastically simplifies integrating iModel-based endpoints without manually writing separate function wrappers.

Key Benefits:

  • Low-code Integration: Connect brand-new endpoints with minimal lines of code.
  • Flexible Overwrites: Update existing tool definitions with update=True.
  • Enhanced Extensibility: Perfect for dynamic multi-model experiments or modular expansions of the AI system.

Upgrade Steps:

  1. Pull/Install: pip install --upgrade lionagi==0.8.4
  2. Refactor: Use branch.connect() to define new endpoint-based tools.
  3. Review: Ensure any existing named tool is updated only if passing update=True.
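The update=True overwrite rule can be sketched with a plain tool registry (a stand-in class, not lionagi's Branch, and a simplified connect() signature):

```python
class Branch:
    """Sketch of dynamic tool registration with overwrite-on-update semantics."""

    def __init__(self):
        self.tools = {}

    def connect(self, name: str, func, update: bool = False):
        # An existing tool is only replaced when update=True is passed.
        if name in self.tools and not update:
            raise ValueError(f"tool {name!r} already registered; pass update=True")
        self.tools[name] = func

b = Branch()
b.connect("search", lambda q: f"results for {q}")
b.connect("search", lambda q: f"v2 results for {q}", update=True)
answer = b.tools["search"]("lions")
```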

Compatibility:

  • Non-breaking update. No config or code changes required except for using the new feature if desired.

Happy coding with LionAGI v0.8.4!

What's Changed

Full Changelog: v0.8.3...v0.8.4

v0.8.3

19 Jan 23:21
39d044f

Release Notes (v0.8.3)

  • Improved ReAct Flow:

    • Refined how ReAct.py checks and triggers additional extension rounds.
    • More intuitive messages in ReActAnalysis, helping guide expansions if needed.
  • Enhanced Tool Argument Validation:

    • function_calling.py and tool.py ensure that Pydantic-based schemas are applied correctly, preventing accidental mismatches in arguments.
  • Streamlined Text Output:

    • format_text_content in instruction.py now produces more concise, token-friendly content while retaining essential context.
  • Version Bump:

    • Incremented lionagi version to 0.8.3, signifying these feature refinements and QA improvements.

No breaking changes are expected; existing workflows that rely on these classes and functions should continue to work with minimal adjustments.

What's Changed

Full Changelog: v0.8.2...v0.8.3

v0.8.2

19 Jan 19:04
85a0cc9

Added:

  • new_model method to OperableModel

What's Changed

Full Changelog: v0.8.1...v0.8.2

v0.8.1

19 Jan 16:04
713634c

Fixed:

Streaming in operations

What's Changed

Full Changelog: v0.8.0...v0.8.1
