I’ve been working on a project called PKE (Precision Knowledge Editing), an open-source method to improve the safety of LLMs by reducing toxic content generation without impacting their general performance. It works by identifying “toxic hotspots” in the model using neuron weight tracking and activation pathway tracing, then modifying those regions with a custom loss function.
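To give a rough intuition for the hotspot-identification step, here's a minimal toy sketch of the general idea: compare average neuron activations on toxic versus benign prompts and flag the neurons with the largest gap. This is purely illustrative — the function name, scoring rule, and synthetic data are my own assumptions, not PKE's actual implementation (see the repo for the real method).

```python
import random
import statistics

def find_toxic_hotspots(toxic_acts, benign_acts, top_k=2):
    """Toy hotspot finder (illustrative only, not PKE's real code).

    toxic_acts / benign_acts: lists of per-prompt activation vectors.
    Returns indices of the top_k neurons whose mean activation is
    most elevated on toxic prompts relative to benign ones.
    """
    n_neurons = len(toxic_acts[0])
    gaps = [
        statistics.mean(row[i] for row in toxic_acts)
        - statistics.mean(row[i] for row in benign_acts)
        for i in range(n_neurons)
    ]
    # Neurons with the largest toxic-vs-benign activation gap
    return sorted(range(n_neurons), key=lambda i: gaps[i], reverse=True)[:top_k]

# Synthetic activations: 8 "neurons", with neurons 3 and 5 artificially
# shifted upward on toxic prompts to act as planted hotspots.
rng = random.Random(0)
benign = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(200)]
toxic = [[rng.gauss(0, 1) + (2.0 if i in (3, 5) else 0.0) for i in range(8)]
         for _ in range(200)]

print(sorted(find_toxic_hotspots(toxic, benign)))  # → [3, 5]
```

The real method then edits only those localized weights (via a custom loss) rather than fine-tuning the whole model, which is what keeps general performance intact.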
If you’re curious about the methodology and results, we’ve also published a paper detailing our approach and experimental findings. It includes comparisons with existing techniques like Detoxifying Instance Neuron Modification (DINM) and demonstrates PKE’s significant improvements in reducing the Attack Success Rate (ASR).
The project is open-source, and I’d love your feedback! The GitHub repo features a Jupyter Notebook that provides a hands-on demo of applying PKE to models like Meta-Llama-3-8B-Instruct: https://github.com/HydroXai/Enhancing-Safety-in-Large-Language-Models
If you’re interested in AI safety, I’d really appreciate your thoughts and suggestions. Thanks for checking it out!
submitted by /u/lial4415