I recently tackled a challenge that I think a lot of us face: building a chatbot that can handle customer support queries without compromising user privacy.
The Challenge: Creating a chatbot that can understand and respond to user queries while also protecting Personally Identifiable Information (PII). It’s a trickier balance than you might expect!
My Solution: I built a chatbot using LangChain with some extra security features. Here’s what it can do:
- Detect PII in user input (emails, phone numbers, etc.)
- Block processing of sensitive data
- Provide helpful responses without exposing private info
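To give a feel for the detection step, here’s a minimal regex-based sketch of PII screening. The patterns and the `contains_pii` helper are illustrative stand-ins, not the notebook’s code:

```
import re

# Illustrative patterns for two common PII types; production rule sets are broader
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def contains_pii(text: str) -> bool:
    """Return True if any known PII pattern appears in the text."""
    return any(p.search(text) for p in PII_PATTERNS.values())

# Block the request before it ever reaches the model
if contains_pii("Reach me at jane.doe@example.com"):
    print("Blocked: input contains PII")
```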
I’ve created a full tutorial with code examples in this Colab notebook: https://git.new/langhchainPII
Here’s a snippet of how I set it up:
```
from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

# Portkey hook config: the referenced guardrail config runs after each request
portkey_config = {
    "after_request_hooks": [
        {
            "id": "your-config-id-here"
        }
    ]
}

# Point the LangChain chat model at the Portkey gateway so the hooks apply
llm = ChatOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    model="gpt-3.5-turbo",
    default_headers=createHeaders(
        provider="openai",
        api_key="Your_API_Key",  # your Portkey API key
        config=portkey_config,
    ),
)
```
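With that in place, the model is called like any other LangChain chat model, and the gateway applies the configured hooks on the round trip. A minimal usage sketch (the prompt and printout are illustrative, not from the notebook):

```
# Invoke the gateway-backed model; the guardrail hooks run on each request
response = llm.invoke("What are your support hours?")
print(response.content)
```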
The Results:
- The bot successfully blocks PII from being processed or repeated back
- It maintains natural conversation flow while prioritizing data security
- Performance impact is minimal, so you can have security without sacrificing speed
You can run it locally or in Colab to see the PII protection in action. I’d love to hear your feedback!