
LangGraph Support #801

Closed
tylertitsworth opened this issue Mar 9, 2024 · 4 comments
Labels: enhancement (New feature or request)

@tylertitsworth (Contributor)

Is your feature request related to a problem? Please describe.
I'm trying to use LangGraph with Chainlit, and when I run my workflow I would like to see the Steps the graph takes. However, the Step class can only be used in an async context, while the graph is constructed from synchronous callables.

Describe the solution you'd like
Given some on_message decorator function like so:

import chainlit as cl


@cl.on_message
async def on_message(message: cl.Message):
    """Handle a message.

    Args:
        message (cl.Message): User prompt input
    """
    app = cl.user_session.get("app")
    # Currently works, but no steps are shown in the UI
    res = await cl.make_async(app.invoke)({"keys": {"question": message.content}})

This produces the following output in my terminal (based on https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_self_rag_mistral_nomic.ipynb):

2024-03-08 22:10:08 - Use pytorch device_name: cpu
---RETRIEVE---
---CHECK RELEVANCE---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---DECIDE TO GENERATE---
---DECISION: GENERATE---
---GENERATE---
---GRADE GENERATION vs DOCUMENTS---
---DECISION: SUPPORTED, MOVE TO FINAL GRADE---
---FINAL GRADE---
---GRADE GENERATION vs QUESTION---
---DECISION: USEFUL---

However, Chainlit gives me the following:
[screenshot: the Chainlit UI, with none of the graph's intermediate steps shown]
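
A coarse workaround that does render something today is to wrap the whole synchronous invoke in a single step. This is only a sketch (the step name "LangGraph" is arbitrary), and it still yields one step for the entire run rather than one per node, which is the actual ask:

import chainlit as cl


@cl.on_message
async def on_message(message: cl.Message):
    app = cl.user_session.get("app")
    # One step for the entire graph run, not one per node
    async with cl.Step(name="LangGraph") as step:
        res = await cl.make_async(app.invoke)(
            {"keys": {"question": message.content}}
        )
        step.output = str(res)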

A LangGraph is constructed from nodes and then compiled into an application; here's my implementation:

from functools import partial


def create_workflow(config, retriever):
    workflow = StateGraph(GraphState)

    # Bind run-specific dependencies into each node function
    retrieve_with_retriever = partial(retrieve, retriever=retriever)
    grade_documents_with_local_llm = partial(grade_documents, local_llm=config["model"])
    generate_with_local_llm = partial(generate, local_llm=config["model"])
    transform_query_with_local_llm = partial(transform_query, local_llm=config["model"])
    grade_generation_v_documents_with_local_llm = partial(
        grade_generation_v_documents, local_llm=config["model"]
    )
    grade_generation_v_question_with_local_llm = partial(
        grade_generation_v_question, local_llm=config["model"]
    )

    workflow.add_node("retrieve", retrieve_with_retriever)
    workflow.add_node("grade_documents", grade_documents_with_local_llm)
    workflow.add_node("generate", generate_with_local_llm)
    workflow.add_node("transform_query", transform_query_with_local_llm)
    workflow.add_node("prepare_for_final_grade", prepare_for_final_grade)

    workflow.set_entry_point("retrieve")
    workflow.add_edge("retrieve", "grade_documents")
    workflow.add_conditional_edges(
        "grade_documents",
        decide_to_generate,
        {
            "transform_query": "transform_query",
            "generate": "generate",
        },
    )
    workflow.add_edge("transform_query", "retrieve")
    workflow.add_conditional_edges(
        "generate",
        grade_generation_v_documents_with_local_llm,
        {
            "supported": "prepare_for_final_grade",
            "not supported": "generate",
        },
    )
    workflow.add_conditional_edges(
        "prepare_for_final_grade",
        grade_generation_v_question_with_local_llm,
        {
            "useful": END,
            "not useful": "transform_query",
        },
    )

    return workflow.compile()

For each node defined, a step should be generated with that function's name and its return value. Here's what a node function might look like:

def decide_to_generate(state):
    """
    Determines whether to generate an answer, or re-generate a question.

    Args:
        state (dict): The current state of the agent, including all keys.

    Returns:
        str: Next node to call
    """

    print("---DECIDE TO GENERATE---")
    state_dict = state["keys"]
    question = state_dict["question"]
    filtered_documents = state_dict["documents"]

    if not filtered_documents:
        # All documents were filtered out by check_relevance,
        # so we will re-generate a new query
        print("---DECISION: TRANSFORM QUERY---")
        return "transform_query"
    else:
        # We have relevant documents, so generate answer
        print("---DECISION: GENERATE---")
        return "generate"

With this approach, the initial document retrievals would be reflected in the retrieve step; most other steps would just return a string representing the output sent to the graph's state machine.
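
A rough sketch of how such steps could be attached, assuming Chainlit's cl.step decorator accepts synchronous callables and that the Chainlit context survives cl.make_async; the as_step helper below is hypothetical:

import chainlit as cl


def as_step(fn):
    # Hypothetical helper: surface a node's name and return value as a step.
    # functools.partial objects have no __name__, so fall back to the name
    # of the wrapped function.
    name = getattr(fn, "__name__", None) or fn.func.__name__
    return cl.step(name=name)(fn)


# Inside create_workflow, each node would then be registered as, e.g.:
# workflow.add_node("retrieve", as_step(retrieve_with_retriever))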

Describe alternatives you've considered
Generating a DAG based on the graph configuration, or allowing some kind of manual method to define what I outlined above, rather than generating the steps for the user.

Additional context
n/a

@tylertitsworth tylertitsworth added the enhancement New feature or request label Mar 9, 2024
@tylertitsworth (Contributor, Author)

I saw the langroid implementation that achieves a similar result: https://github.com/langroid/langroid/blob/main/langroid/agent/callbacks/chainlit.py#L566

@DhavalThkkar

+1 on this. We definitely need something for LangGraph, as the dev team at LangChain is moving heavily in that direction.

@3x10RaiseTo8

The impact will be huge.

If the project doesn't require streaming, this is simple to implement, but that is rarely the case; streaming is a must. Here are a few thoughts.

  • LangChain is moving to an event-based architecture (astream_events), and so can LangGraph, since it is built on LangChain's Runnable interface.
  • Events like on_chat_model_start, on_tool_start, etc. are predefined in the callbacks module of the langchain_core package.
  • Projects like Streamlit implemented their LangChain support by defining a custom callback handler in the langchain_community package.
  • That said, because of the flexibility LangGraph gives users, defining a callback handler is not as straightforward: users can introduce arbitrarily complicated structures in the graph.
  • We can still generalize it somewhat, with events like on_node_start, etc.; see the sketch below.
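
A minimal sketch of that generalization, assuming the compiled graph exposes astream_events (v1 schema) through the Runnable interface; the mapping of chain events to steps is simplified and the bookkeeping is hypothetical:

import chainlit as cl


@cl.on_message
async def on_message(message: cl.Message):
    app = cl.user_session.get("app")
    steps = {}  # run_id -> cl.Step

    async for event in app.astream_events(
        {"keys": {"question": message.content}}, version="v1"
    ):
        kind, name, run_id = event["event"], event["name"], event["run_id"]
        # Graph nodes surface as chain events carrying the node's name
        if kind == "on_chain_start":
            step = cl.Step(name=name)
            await step.send()
            steps[run_id] = step
        elif kind == "on_chain_end" and run_id in steps:
            step = steps.pop(run_id)
            step.output = str(event["data"].get("output", ""))
            await step.update()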

LangGraph is the sweet spot in the abstraction game, and Chainlit is the sweet spot for chatbot interfaces. Huge impact.

@ModEnter (Collaborator)

Hello, an example of LangGraph support has been added to Chainlit/cookbook in this PR. Feel free to enhance it with more functionality!

@ModEnter ModEnter self-assigned this Sep 13, 2024