docs: tool-use use case (langchain-ai#15783) Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
langchain[patch]: bump community >=0.0.8,<0.1 (langchain-ai#15492)
add methods to deserialize prompts saved in old formats (langchain-ai#14857)
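For context, prompt deserialization in `langchain` goes through `load_prompt`; a minimal sketch of what this change enables, assuming a prompt previously serialized to a local `prompt.json` (a hypothetical path used only for illustration):

```python
from langchain.prompts import load_prompt

# Deserialize a prompt saved to disk; after langchain-ai#14857 this also
# accepts files written by older serialization formats.
prompt = load_prompt("prompt.json")  # "prompt.json" is a hypothetical example path
print(prompt)
```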
docs[patch]: `google` platform page update (langchain-ai#14475) Added missing tools. Co-authored-by: Erick Friis <erickfriis@gmail.com>
experimental[patch]: SmartLLMChain output key customization (langchain-ai#14466)

**Description**
The `SmartLLMChain` output key was fixed to `"resolution"`. Unfortunately, this prevented using multiple `SmartLLMChain`s in a `SequentialChain` because of colliding output keys. This change simply adds the option to customize the output key, allowing sequential chaining. The default behavior is the same as the current behavior. Now it's possible to do the following:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain_experimental.smart_llm import SmartLLMChain
from langchain.chains import SequentialChain

joke_prompt = PromptTemplate(
    input_variables=["content"],
    template="Tell me a joke about {content}.",
)
review_prompt = PromptTemplate(
    input_variables=["scale", "joke"],
    template="Rate the following joke from 1 to {scale}: {joke}",
)

llm = ChatOpenAI(temperature=0.9, model_name="gpt-4-32k")

# Distinct output keys let the two chains coexist in one SequentialChain.
joke_chain = SmartLLMChain(llm=llm, prompt=joke_prompt, output_key="joke")
review_chain = SmartLLMChain(llm=llm, prompt=review_prompt, output_key="review")

chain = SequentialChain(
    chains=[joke_chain, review_chain],
    input_variables=["content", "scale"],
    output_variables=["review"],
    verbose=True,
)
response = chain.run({"content": "chickens", "scale": "10"})
print(response)
```

Co-authored-by: Erick Friis <erick@langchain.dev>