Phi-3 Pro Shampoo Analyzer

A fine-tuned version of Microsoft's Phi-3.5-mini-instruct, trained to analyze shampoo ingredient lists for safety and compatibility.

Model Details

  • Base model: microsoft/Phi-3.5-mini-instruct
  • Training method: LoRA fine-tuning on an 8-bit quantized base model
  • Training dataset: custom dataset of 75 shampoo-analysis examples
  • Final training loss: 0.6083
  • Parameters: 3.8B in the base model, with ~25M trainable LoRA parameters (verifiable with the snippet below)
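
The trainable-parameter count is easy to verify once the adapters are loaded (see Local Usage below). A minimal sketch, assuming the model has already been wrapped as a PeftModel:

# Reports trainable vs. total parameter counts
# (roughly ~25M trainable out of ~3.8B total for this adapter)
model.print_trainable_parameters()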

Use Cases

This model is designed to:

  • Analyze shampoo ingredient lists for safety concerns
  • Explain potential interactions between ingredients
  • Identify potentially harmful ingredients for different hair types
  • Provide information about specific ingredients' functions and benefits

Limitations

  • The model's knowledge is limited to its training data and may not cover all ingredients
  • The model's output is informational only and is not a substitute for professional medical advice
  • Always consult a dermatologist for serious concerns

Inference API Usage

You can use this model directly via the Hugging Face Inference API:

import requests

API_URL = "https://api-inference.huggingface.co/models/wmounger/phi-3-pro-shampoo-analyzer"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

prompt = '''<|system|>
You are a helpful assistant that analyzes shampoo ingredients for safety and compatibility.<|end|>
<|user|>
Is sodium lauryl sulfate safe to use in shampoo?<|end|>
<|assistant|>'''

output = query({
    "inputs": prompt,
    "parameters": {
        "max_new_tokens": 200,
        "temperature": 0.7,
        "top_p": 0.9
    }
})
print(output)
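
If you prefer a typed client over raw requests, the huggingface_hub package wraps the same endpoint. A minimal sketch, assuming huggingface_hub is installed and the model is reachable through the Inference API:

from huggingface_hub import InferenceClient

client = InferenceClient(model="wmounger/phi-3-pro-shampoo-analyzer", token="YOUR_HF_TOKEN")

# Same prompt string and sampling parameters as above
response = client.text_generation(
    prompt,
    max_new_tokens=200,
    temperature=0.7,
    top_p=0.9,
)
print(response)

Note that the raw endpoint returns a list of dicts with a "generated_text" field, while the client returns the generated string directly.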

Local Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the tokenizer for the base model
base_model_id = "microsoft/Phi-3.5-mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Load the base model in half precision
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)

# Attach the fine-tuned LoRA adapters
model = PeftModel.from_pretrained(model, "wmounger/phi-3-pro-shampoo-analyzer")

# Example prompt
prompt = '''<|system|>
You are a helpful assistant that analyzes shampoo ingredients for safety and compatibility.<|end|>
<|user|>
Is sodium lauryl sulfate safe to use in shampoo?<|end|>
<|assistant|>'''

# Generate a response (the decoded text includes the prompt)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
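
Rather than hand-assembling the special tokens, you can let the tokenizer build the prompt from a message list. A minimal sketch, assuming the adapter reuses the base model's chat template:

# Build the prompt with the base model's chat template
messages = [
    {"role": "system", "content": "You are a helpful assistant that analyzes shampoo ingredients for safety and compatibility."},
    {"role": "user", "content": "Is sodium lauryl sulfate safe to use in shampoo?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # appends the <|assistant|> turn
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))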

Training Details

The model was fine-tuned using the settings below; a configuration sketch follows the list.

  • 5 epochs
  • Learning rate: 1.5e-4 with a cosine scheduler
  • 8-bit quantization with LoRA
  • Warmup ratio: 0.1
  • Per-device batch size: 3 with gradient accumulation steps: 4 (effective batch size 12)
  • Approximately 30 optimizer steps in total
  • Final training loss: 0.6083
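
The hyperparameters above roughly correspond to a configuration like the following. This is an illustrative sketch, not the exact training script; the LoRA rank, alpha, and target modules are assumptions:

from transformers import TrainingArguments, BitsAndBytesConfig
from peft import LoraConfig

# 8-bit quantization for the base model (requires bitsandbytes)
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

# LoRA adapter configuration; r, alpha, and target modules are assumed values
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["qkv_proj", "o_proj"],  # assumption, depends on the training script
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Hyperparameters as listed above: 75 examples / (3 * 4) ≈ 6 optimizer
# steps per epoch, so 5 epochs give roughly 30 steps in total.
training_args = TrainingArguments(
    output_dir="phi-3-pro-shampoo-analyzer",
    num_train_epochs=5,
    learning_rate=1.5e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    per_device_train_batch_size=3,
    gradient_accumulation_steps=4,
    logging_steps=5,
)

These objects would then be passed to a trainer (for example, trl's SFTTrainer) together with the dataset.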