Fix errors and improve Doc (#143)
* Fix link issues and add icons

* Improve Doc

* fix test

* making minor modifications to shuguangs' doc changes

---------

Co-authored-by: Salman Paracha <[email protected]>
Co-authored-by: Adil Hafeez <[email protected]>
3 people authored Oct 8, 2024
1 parent 3ed50e6 commit b30ad79
Showing 27 changed files with 396 additions and 329 deletions.
26 changes: 13 additions & 13 deletions docs/source/build_with_arch/agent.rst
@@ -11,7 +11,7 @@ claims to creating ad campaigns - via prompts.

Arch analyzes prompts, extracts critical information from prompts, engages in lightweight conversation with
the user to gather any missing parameters and makes API calls so that you can focus on writing business logic.
Arch does this via its purpose-built :ref:`Arch-FC LLM <function_calling>` - the fastest (200ms p90 - 10x faser than GPT-4o)
Arch does this via its purpose-built :ref:`Arch-Function <function_calling>` - the fastest (200ms p90 - 10x faster than GPT-4o)
and cheapest (100x cheaper than GPT-4o) function-calling LLM that matches performance with frontier models.
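Conceptually, once Arch-Function has extracted parameters from a prompt, the application only receives a structured call. A minimal sketch of that hand-off (the call shape and handler names here are illustrative assumptions, not Arch's actual wire format):

```python
# Hypothetical shape of an extracted function call -- not Arch's real schema.
extracted = {"name": "reboot_devices", "parameters": {"device_ids": ["A", "B"]}}

# Application-side handlers keyed by prompt-target name (illustrative only).
handlers = {"reboot_devices": lambda device_ids: {"rebooted": device_ids}}

def dispatch(call):
    # Route the structured call to the matching business-logic handler.
    return handlers[call["name"]](**call["parameters"])

assert dispatch(extracted) == {"rebooted": ["A", "B"]}
```

The point of the sketch: your code never parses the raw prompt; it only sees validated, structured parameters.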

.. image:: includes/agent/function-calling-flow.jpg
@@ -25,17 +25,17 @@ In the most common scenario, users will request a single action via prompts, and
request by extracting relevant parameters, validating the input, and calling the designated function or API. Here
is how you would go about enabling this scenario with Arch:

Step 1: Define prompt targets with functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Step 1: Define Prompt Targets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. literalinclude:: includes/agent/function-calling-agent.yaml
:language: yaml
:linenos:
:emphasize-lines: 16-37
:caption: Define prompt targets that can enable users to engage with API and backened functions of an app
:emphasize-lines: 21-34
:caption: Prompt Target Example Configuration

Step 2: Process request parameters in Flask
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Step 2: Process Request Parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the prompt targets are configured as above, handling those parameters is

@@ -44,8 +44,8 @@ Once the prompt targets are configured as above, handling those parameters is
:linenos:
:caption: Parameter handling with Flask

Parallel/ Multiple Function Calling
-----------------------------------
Parallel & Multiple Function Calling
------------------------------------
In more complex use cases, users may request multiple actions or need multiple APIs/functions to be called
simultaneously or sequentially. With Arch, you can handle these scenarios efficiently using parallel or multiple
function calling. This allows your application to engage in a broader range of interactions, such as updating
@@ -54,8 +54,8 @@ different datasets, triggering events across systems, or collecting results from
Arch-Function is built to manage these parallel tasks efficiently, ensuring low latency and high throughput, even
when multiple functions are invoked. It provides two mechanisms to handle these cases:

Step 1: Define Multiple Function Targets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Step 1: Define Prompt Targets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When enabling multiple function calling, define the prompt targets in a way that supports multiple functions or
API calls based on the user's prompt. These targets can be triggered in parallel or sequentially, depending on
@@ -66,5 +66,5 @@ Example of Multiple Prompt Targets in YAML:
.. literalinclude:: includes/agent/function-calling-agent.yaml
:language: yaml
:linenos:
:emphasize-lines: 16-37
:caption: Define prompt targets that can enable users to engage with API and backened functions of an app
:emphasize-lines: 21-34
:caption: Prompt Target Example Configuration
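To see how parallel function calling plays out on the application side, here is a minimal sketch of fanning out several calls resolved from one prompt (the handler is a stand-in for a real API call; Arch performs its own orchestration internally):

```python
from concurrent.futures import ThreadPoolExecutor

def reboot_device(device_id):
    # Stand-in for a real call to the /agent/device_reboot endpoint.
    return {"device_id": device_id, "status": "rebooted"}

# Several calls resolved from one prompt, e.g. "reboot devices A, B, and C".
device_ids = ["A", "B", "C"]

with ThreadPoolExecutor() as pool:
    # map preserves input order even though calls run concurrently.
    results = list(pool.map(reboot_device, device_ids))

assert [r["device_id"] for r in results] == ["A", "B", "C"]
```

Sequential dependencies (one call's output feeding the next) would instead be chained in plain code after each result returns.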
@@ -1,39 +1,36 @@
version: "0.1-beta"
listen:
address: 127.0.0.1 | 0.0.0.0
port_value: 8080 #If you configure port 443, you'll need to update the listener with tls_certificates
version: v0.1

system_prompt: |
You are a network assistant that just offers facts; not advice on manufacturers or purchasing decisions.
listen:
address: 0.0.0.0 # or 127.0.0.1
port: 10000
# Defines how Arch should parse the content from application/json or text/plain Content-type in the http request
message_format: huggingface

# Centralized way to manage LLMs, manage keys, retry logic, failover and limits in a central way
llm_providers:
- name: "OpenAI"
provider: "openai"
- name: OpenAI
provider: openai
access_key: OPENAI_API_KEY
model: gpt-4o
default: true
stream: true

# default system prompt used by all prompt targets
system_prompt: You are a network assistant that just offers facts; not advice on manufacturers or purchasing decisions.

prompt_targets:
- name: reboot_devices
description: >
This prompt target handles user requests to reboot devices.
It ensures that when users request to reboot specific devices or device groups, the system processes the reboot commands accurately.
**Examples of user prompts:**
- "Please reboot device 12345."
- "Restart all devices in tenant group tenant-XYZ
- "I need to reboot devices A, B, and C."
description: Reboot specific devices or device groups

path: /agent/device_reboot
parameters:
- name: "device_ids"
type: list # Options: integer | float | list | dictionary | set
description: "A list of device identifiers (IDs) to reboot."
- name: device_ids
type: list
description: A list of device identifiers (IDs) to reboot.
required: false
- name: "device_group"
type: string # Options: string | integer | float | list | dictionary | set
description: "The name of the device group to reboot."
- name: device_group
type: str
description: The name of the device group to reboot
required: false

# Arch creates a round-robin load balancing between different endpoints, managed via the cluster subsystem.
@@ -42,6 +39,6 @@ endpoints:
# value could be ip address or a hostname with port
# this could also be a list of endpoints for load balancing
# for example endpoint: [ ip1:port, ip2:port ]
endpoint: "127.0.0.1:80"
endpoint: 127.0.0.1:80
# max time to wait for a connection to be established
connect_timeout: 0.005s
26 changes: 14 additions & 12 deletions docs/source/build_with_arch/includes/agent/parameter_handling.py
@@ -2,40 +2,42 @@

app = Flask(__name__)

@app.route('/agent/device_summary', methods=['POST'])

@app.route("/agent/device_summary", methods=["POST"])
def get_device_summary():
"""
Endpoint to retrieve device statistics based on device IDs and an optional time range.
"""
data = request.get_json()

# Validate 'device_ids' parameter
device_ids = data.get('device_ids')
device_ids = data.get("device_ids")
if not device_ids or not isinstance(device_ids, list):
return jsonify({'error': "'device_ids' parameter is required and must be a list"}), 400
return jsonify(
{"error": "'device_ids' parameter is required and must be a list"}
), 400

# Validate 'time_range' parameter (optional, defaults to 7)
time_range = data.get('time_range', 7)
time_range = data.get("time_range", 7)
if not isinstance(time_range, int):
return jsonify({'error': "'time_range' must be an integer"}), 400
return jsonify({"error": "'time_range' must be an integer"}), 400

# Simulate retrieving statistics for the given device IDs and time range
# In a real application, you would query your database or external service here
statistics = []
for device_id in device_ids:
# Placeholder for actual data retrieval
stats = {
'device_id': device_id,
'time_range': f'Last {time_range} days',
'data': f'Statistics data for device {device_id} over the last {time_range} days.'
"device_id": device_id,
"time_range": f"Last {time_range} days",
"data": f"Statistics data for device {device_id} over the last {time_range} days.",
}
statistics.append(stats)

response = {
'statistics': statistics
}
response = {"statistics": statistics}

return jsonify(response), 200

if __name__ == '__main__':

if __name__ == "__main__":
app.run(debug=True)
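The endpoint above can be exercised without a running server by mirroring its validation rules. The helper below is a hypothetical client-side check written for illustration, not part of the Arch API:

```python
def validate_device_summary(data):
    """Mirror the endpoint's input checks (illustrative helper only)."""
    device_ids = data.get("device_ids")
    if not device_ids or not isinstance(device_ids, list):
        return "'device_ids' parameter is required and must be a list"
    time_range = data.get("time_range", 7)  # optional, defaults to 7
    if not isinstance(time_range, int):
        return "'time_range' must be an integer"
    return None  # payload is valid

assert validate_device_summary({"device_ids": ["d1", "d2"]}) is None
assert validate_device_summary({}) is not None
assert validate_device_summary({"device_ids": ["d1"], "time_range": "7"}) is not None
```

Validating the payload shape client-side catches malformed requests before they reach the Flask handler's 400 responses.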
75 changes: 42 additions & 33 deletions docs/source/build_with_arch/includes/rag/intent_detection.py
@@ -10,6 +10,7 @@
# Global dictionary to keep track of user memories
user_memories = {}


def get_user_conversation(user_id):
"""
Retrieve the user's conversation memory using LangChain.
@@ -19,6 +20,7 @@ def get_user_conversation(user_id):
user_memories[user_id] = ConversationBufferMemory(return_messages=True)
return user_memories[user_id]


def update_user_conversation(user_id, client_messages, intent_changed):
"""
Update the user's conversation memory with new messages using LangChain.
@@ -34,33 +36,34 @@ def update_user_conversation(user_id, client_messages, intent_changed):

# Process each new message
for index, message in enumerate(new_messages):
role = message.get('role')
content = message.get('content')
role = message.get("role")
content = message.get("content")
metadata = {
'uuid': str(uuid.uuid4()),
'timestamp': datetime.utcnow().isoformat(),
'intent_changed': False # Default value
"uuid": str(uuid.uuid4()),
"timestamp": datetime.utcnow().isoformat(),
"intent_changed": False, # Default value
}

# Mark the intent change on the last message if detected
if intent_changed and index == len(new_messages) - 1:
metadata['intent_changed'] = True
metadata["intent_changed"] = True

# Create a new message with metadata
if role == 'user':
if role == "user":
memory.chat_memory.add_message(
HumanMessage(content=content, additional_kwargs={'metadata': metadata})
HumanMessage(content=content, additional_kwargs={"metadata": metadata})
)
elif role == 'assistant':
elif role == "assistant":
memory.chat_memory.add_message(
AIMessage(content=content, additional_kwargs={'metadata': metadata})
AIMessage(content=content, additional_kwargs={"metadata": metadata})
)
else:
# Handle other roles if necessary
pass

return memory


def get_messages_since_last_intent(messages):
"""
Retrieve messages from the last intent change onwards using LangChain.
@@ -69,20 +72,22 @@ def get_messages_since_last_intent(messages):
for message in reversed(messages):
# Insert message at the beginning to maintain correct order
messages_since_intent.insert(0, message)
metadata = message.additional_kwargs.get('metadata', {})
metadata = message.additional_kwargs.get("metadata", {})
# Break if intent_changed is True
if metadata.get('intent_changed', False) == True:
if metadata.get("intent_changed", False) == True:
break

return messages_since_intent


def forward_to_llm(messages):
"""
Forward messages to an upstream LLM using LangChain.
"""
# Convert messages to a conversation string
conversation = ""
for message in messages:
role = 'User' if isinstance(message, HumanMessage) else 'Assistant'
role = "User" if isinstance(message, HumanMessage) else "Assistant"
content = message.content
conversation += f"{role}: {content}\n"
# Use LangChain's LLM to get a response. This call is proxied through Arch for end-to-end observability and traffic management
@@ -92,28 +97,31 @@ def forward_to_llm(messages):
response = llm(prompt)
return response

@app.route('/process_rag', methods=['POST'])

@app.route("/process_rag", methods=["POST"])
def process_rag():
# Extract JSON data from the request
data = request.get_json()

user_id = data.get('user_id')
user_id = data.get("user_id")
if not user_id:
return jsonify({'error': 'User ID is required'}), 400
return jsonify({"error": "User ID is required"}), 400

client_messages = data.get('messages')
client_messages = data.get("messages")
if not client_messages or not isinstance(client_messages, list):
return jsonify({'error': 'Messages array is required'}), 400
return jsonify({"error": "Messages array is required"}), 400

# Extract the intent change marker from Arch's headers if present for the current prompt
intent_changed_header = request.headers.get('x-arch-intent-marker', '').lower()
if intent_changed_header in ['', 'false']:
intent_changed_header = request.headers.get("x-arch-intent-marker", "").lower()
if intent_changed_header in ["", "false"]:
intent_changed = False
elif intent_changed_header == 'true':
elif intent_changed_header == "true":
intent_changed = True
else:
# Invalid value provided
return jsonify({'error': 'Invalid value for x-arch-prompt-intent-change header'}), 400
return jsonify(
{"error": "Invalid value for x-arch-prompt-intent-change header"}
), 400

# Update user conversation based on intent change
memory = update_user_conversation(user_id, client_messages, intent_changed)
@@ -127,26 +135,27 @@ def process_rag():
# Prepare the messages to return
messages_to_return = []
for message in memory.chat_memory.messages:
role = 'user' if isinstance(message, HumanMessage) else 'assistant'
role = "user" if isinstance(message, HumanMessage) else "assistant"
content = message.content
metadata = message.additional_kwargs.get('metadata', {})
metadata = message.additional_kwargs.get("metadata", {})
message_entry = {
'uuid': metadata.get('uuid'),
'timestamp': metadata.get('timestamp'),
'role': role,
'content': content,
'intent_changed': metadata.get('intent_changed', False)
"uuid": metadata.get("uuid"),
"timestamp": metadata.get("timestamp"),
"role": role,
"content": content,
"intent_changed": metadata.get("intent_changed", False),
}
messages_to_return.append(message_entry)

# Prepare the response
response = {
'user_id': user_id,
'messages': messages_to_return,
'llm_response': llm_response
"user_id": user_id,
"messages": messages_to_return,
"llm_response": llm_response,
}

return jsonify(response), 200

if __name__ == '__main__':

if __name__ == "__main__":
app.run(debug=True)
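The intent-window logic in `get_messages_since_last_intent` can be illustrated with plain dicts. This is a simplified sketch of the same walk-backwards idea, without LangChain message objects:

```python
def window_since_last_intent(messages):
    """Collect messages from the most recent intent change onwards."""
    window = []
    for message in reversed(messages):
        window.insert(0, message)  # prepend to keep chronological order
        if message.get("intent_changed", False):
            break  # stop once the last intent-change marker is reached
    return window

history = [
    {"content": "hi", "intent_changed": False},
    {"content": "new topic", "intent_changed": True},
    {"content": "follow-up", "intent_changed": False},
]

assert [m["content"] for m in window_since_last_intent(history)] == ["new topic", "follow-up"]
```

Only the messages after (and including) the last detected intent change are forwarded to the LLM, which keeps the prompt context focused on the current topic.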
